The effectiveness of intimacy training with a cognitive-behavioral approach on couples' quality of life and happiness

Objective: The present research aimed to study the effectiveness of intimacy training on couples' quality of life and happiness. Method: The research was quasi-experimental, with a pretest-posttest and follow-up design and a control group. The population comprised couples in Kerman aged 18-26; 50 couples meeting the entry criteria were selected and randomly assigned to groups. The Oxford Happiness Questionnaire and a quality of life questionnaire were administered to the selected sample. Results: Intimacy training enhanced the physical health, mental health, and social relationship components of quality of life. It also enhanced the life satisfaction, self-esteem, and mind satisfaction components of happiness. Conclusion: Given these results, intimacy training could be used in counseling clinics.

INTRODUCTION

Marriage is one of the most important events in human life, bringing together two people with their own talents, abilities, needs and interests. It is a complex, delicate and dynamic human relationship with specific characteristics (1, 2) and one of the most important social practices for meeting human emotional and spiritual needs (3). The family, the first social organization created by marriage, has existed since the onset of human life; it is the basis of life and essentially a center of help, reassurance, tranquility and healing. It relieves the suffering of its members and directs them toward growth and prosperity (1, 4). One of the damages facing families today is the establishment of unprincipled and problematic relationships between couples, the coldness of those relationships and, ultimately, an increase in divorce rates. Intimacy is the most important element of marital life; if men and women cannot accurately know each other, they can never be integrated into a single unit (5). The need for intimacy and close connection is one of the basic needs of each individual and an important part of a successful marriage; according to Shaver and Reis, intimacy is the result of an interpersonal experience (6, 7, 8), and its creation and preservation in marriage is a skill and an art which, in addition to mental health and healthy early experiences, requires the acquisition of skills and the performance of special duties (9). Intimacy is closeness, similarity and a loving personal relationship with another person that requires awareness, deep understanding, acceptance, and the expression of thoughts and feelings. It is a key feature of communication, especially among couples, that affects marital adjustment and mental health (for example, by reducing the risk of depression and increasing happiness and well-being) and provides a satisfying life (10, 44). Conversely, a lack of intimacy is one of the most common causes of distress; if people do not master the art of establishing intimacy, the result is conflict, and if these conflicts persist they may escalate into turmoil that can end in the dissolution of the family.
According to the Statistical Centre of Iran, the number of registered divorces has increased over the last decade (10, 11). This, in turn, can affect quality of life, welfare and happiness, and in general an individual's satisfaction with life. The important point is that training in intimacy and communication skills can not only help prevent this phenomenon but can also strengthen factors such as quality of life and happiness, which are important psychological factors in everyone's life (6, 12, 13). Quality of life is defined as a cognitive judgment of life satisfaction. The World Health Organization has defined quality of life as individuals' perception of their present conditions, in light of their culture and value system, in relation to their goals, expectations, standards and concerns, and its effects on physical and psychosocial conditions (14). Studies have shown that quality of life is related to marital intimacy (15, 16). Barzegar and Samani (17), training in communication skills programs (18, 19, 20, 21, 22, 23), and training in relationship enrichment programs (24, 25, 26) have all been reported to improve individuals' quality of life.

Happiness means being happy and having a positive attitude toward life; it is one of a person's most important psychological needs and has a significant effect on quality of life. Happiness is recognized by high life satisfaction, positive emotions and low levels of negative emotions (Morrow-Howell, 2010; quoted by Meyzari Ali and Dasht Bozorgi, 2015). Studies have shown a relationship between communication skills and couples' happiness (27); couples with higher levels of happiness reported higher intimacy (28), and communication skills training programs are effective in increasing happiness (29).

Therefore, given that marriage is one of the most important stages in each person's life, attention to the psychological issues of couples is necessary for the creation of a family and, subsequently, a healthy community, and this depends on couples' skills and abilities in this field. Considering also the gap left by the lack of research conducted solely on intimacy training, the main question of the present research is: does intimacy training affect the quality of life and happiness of couples?

SOCIETY, SAMPLE AND SAMPLING METHOD

The research method is quasi-experimental with pre-test, post-test, and a control group. The statistical population consisted of married couples aged 20 to 27 years in Kerman. From about 3,000 couples who were referred to training centers for premarital training (of whom 65% were in the desired age range), 50 couples were selected by random sampling according to the entry criteria and were randomly assigned to the experimental and control groups. The entry criteria were: an age range of 20 to 27 years, at least a diploma, and no serious physical or mental illness. Exit criteria were: an unpleasant experience in the last few months (such as the death of a loved one) and a history of divorce or remarriage.

TOOLS
1. Quality of Life Questionnaire: This questionnaire measures a person's quality of life over the last two weeks and was developed by the WHO in 1989. It has 26 items in four domains: physical health (items 3, 4, 10, 15, 16, 17, 18), social relationships (items 20, 21, 22), environmental health (items 8, 9, 12, 13, 14, 23, 24, 25) and psychological health (items 5, 6, 7, 11, 19, 26). The first two questions do not fall into any of these subscales (30). Items are scored on a five-point Likert scale from 1 (completely disagree) to 5 (completely agree). Nassiri (31) reported the reliability of this scale using three methods — retest with a three-week interval, split-half, and Cronbach's alpha — as 0.67, 0.87 and 0.84, respectively. Rahimi and Kheir (32) reported Cronbach's alpha coefficients of 0.77 for the physical health dimension, 0.77 for the psychological health dimension, 0.65 for the social relationship dimension and 0.77 for the living environment dimension.

2. Oxford Happiness Questionnaire: This scale was developed by Argyle and Lu in 1989 and has 29 items scored on a Likert scale from 0 to 3. The original version has no subscales, but in the research of Hadinejad and Zarei (33) a Cronbach's alpha coefficient of 0.84 and a retest coefficient of 0.78 were reported, and Alipour and Noorbala (1998) reported a Cronbach's alpha of 0.93, a split-half reliability coefficient of 0.92 and a retest reliability of 0.79 for this questionnaire. The version with the subscales of Alipour and Agah Harris was used in this research.

IMPLEMENTATION METHOD

First, 50 couples were selected according to the entry criteria using an available sampling method. Next, the subjects were randomly assigned to experimental and control groups and a pre-test was administered. The experimental group then received ten 45-minute sessions of intimacy training with a cognitive-behavioral approach, while no training was given to the control group. The intimacy training sessions were conducted based on the guidelines of Etemadi, Rezaei and Ahmadi (36). The post-test was administered after the end of the treatment plan. A summary of what was covered in the treatment sessions is presented in Table 1.

FINDINGS

The descriptive statistics of the happiness components, by group and type of test, are presented in Table 2. Multivariate analysis of covariance (MANCOVA) was used to examine the effectiveness of intimacy training on happiness. Equality of the covariance matrices is one of the assumptions of this analysis; the results of Box's test showed that this assumption held (Box's M = 46.96, F = 1.414, P > 0.05). Another assumption is homogeneity of error variances; the results of Levene's test are presented in Table 3. As shown in Table 3, the condition of equal error variances held for all components (P > 0.05). The MANCOVA accordingly showed a significant difference between the two groups in the linear combination of the variables (effect size = 0.335, P < 0.01, F = 4.134, Wilks' lambda = 0.665). Univariate analysis of covariance was used to evaluate the patterns of difference in Table 4.
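As a concrete illustration of this analysis pipeline, the sketch below shows how the assumption checks and the MANCOVA could be run in Python with pingouin and statsmodels. It is a minimal sketch only: the DataFrame `df` and all column names are hypothetical placeholders, not taken from the study.

```python
# Minimal sketch of the reported pipeline: Box's M, Levene's test, MANCOVA.
# `df` and its columns are hypothetical, NOT the study data.
import pandas as pd
import pingouin as pg
from statsmodels.multivariate.manova import MANOVA

# Assumed columns: group (0 = control, 1 = training), post-test scores for
# the happiness components, and matching pre-test scores as covariates.
dvs = ["life_satisfaction", "self_esteem", "mind_satisfaction"]

# Box's M test for equality of covariance matrices across groups
print(pg.box_m(df, dvs=dvs, group="group"))

# Levene's test for homogeneity of error variances, one component at a time
for dv in dvs:
    print(dv, pg.homoscedasticity(df, dv=dv, group="group"))

# MANCOVA: post-test scores as DVs, group as factor, pre-tests as covariates
model = MANOVA.from_formula(
    "life_satisfaction + self_esteem + mind_satisfaction ~ group "
    "+ pre_life_satisfaction + pre_self_esteem + pre_mind_satisfaction",
    data=df,
)
print(model.mv_test())  # reports Wilks' lambda, F and p for each term
```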
As shown in Table 4, intimacy training improved life satisfaction, self-esteem and mind satisfaction.

The descriptive statistics of the quality of life components, by group and type of test, are presented in Table 5. Multivariate analysis of covariance was likewise used to evaluate the effectiveness of intimacy training on quality of life. The results of Box's test showed that the covariance-matrix assumption held (Box's M = 10.775, F = 0.984, P > 0.05). The results of Levene's test for homogeneity of error variances are presented in Table 6; for all components, the condition of equal error variances held (P > 0.05). The MANCOVA showed a significant difference between the two groups in the linear combination of the variables (effect size = 0.197, P < 0.05, F = 2.645, Wilks' lambda = 0.803). Univariate analysis of covariance was used to evaluate the patterns of difference, in Table 7. As shown in Table 7, intimacy training improved physical health, mental health, and social relationships.

DISCUSSION AND CONCLUSION

The purpose of this study was to investigate the effect of intimacy training on couples' quality of life and happiness. The results showed that intimacy training is effective for, and promotes, couples' quality of life and happiness. This finding is broadly consistent with the findings of Reza Zadeh Goli and Kiani (15, 23); Amini and Heidari (24); Amini and Kaboli (25); and Sabzevari and Gerami (26). These researchers concluded that quality of life is linked to marital intimacy and that training in communication skills and relationship enrichment programs improves individuals' quality of life. The findings of this study are also consistent with Klein and Stafford (27), Sanhedia (28), and Shayan and Ahmadi Gattat (29), who concluded that there is a relationship between couples' communication skills and happiness, that couples with higher levels of happiness report higher intimacy, and that communication skills training is effective in increasing happiness. Given that similar research has not previously been conducted in this field, it was difficult to find directly comparable records, so the findings of this study cannot be compared with other studies with certainty.

In explaining these findings in general, it can be said that communication problems and couples' inability to communicate properly are the most important factors in conflict and, consequently, in dissatisfaction, incompatibility and a decline in quality of life. Clearly, equipping couples with proper communication skills and awareness can lead to greater satisfaction and compatibility in marital life (37). In this program, people learn skills that change their own and their spouses' behavior, and they eventually gain the ability to create new lifestyles. With the help of these skills, people can send their messages more explicitly and reach a deeper understanding of each other. Since couples establish this style by agreement, they learn over time to change their maladaptive behavior and communication styles, to add positive behaviors to their behavioral repertoire, and to use these habits in conflict situations, improving their quality of life.
The skills taught in this program — such as problem-solving — include the ability to maintain a calm atmosphere during discussion and problem-solving, to adopt the other person's perspective, to de-escalate negative reciprocal interactions and anger, and to change expected behavior patterns. Empathy skills prepare individuals to understand their partner's emotional, psychological and interpersonal needs and inclinations more compassionately, and to elicit honest, respectful and trustworthy behavior, common understanding, and supportive, intimate behavior more quickly and more fully. Negotiation skills teach couples to maintain a positive emotional atmosphere so that, when discussing controversial issues, they prevent anger from escalating; people learn to replace negative emotions with positive ones and to interact without anger. As a result, it can be said that the learned skills greatly improve quality of life and the sense of satisfaction with life (21, 22, 24, 38).

Regarding the effect of the intimacy training program on happiness, it can be said that these skills teach people first to recognize and accept themselves, and then to listen to their spouses, talk with them, accept them, and act to resolve their conflicts. Acceptance of the spouse makes connecting easy and friendly (39). Teaching communication and intimacy skills treats joint decision-making as the fundamental principle of marital life, turning hard and inflexible rules into flexible ones and thereby increasing satisfaction. In addition, by improving couples' conflict-resolution skills, the training increases positive emotions and feelings of satisfaction, which in turn can increase quality of life. Communication skills create an emotional bond between couples, and training in these skills fosters realistic expectations, cognitive balance, right thinking, courage and self-expression, which together lead to good relationships, happiness and welfare. Intimacy training teaches the principles of effective dialogue — including listening carefully to the spouse and avoiding an immediate response — and can thus reduce spouses' emotional reactions and help them listen and talk to each other effectively and usefully.

The program also provides couples with opportunities to practice new skills and receive feedback. Additionally, the assignments set for subsequent sessions oblige the couples to practice the skills in their own relationship; these assignments bring the couple closer together and increase the time they spend together, which can increase intimacy between them and bring happiness and welfare (20, 40, 41).
A person who experiences higher levels of intimacy in relationships can present himself or herself in a more desirable way and express his or her needs more effectively to the spouse or partner. By learning the skills in the program, couples discover new and innovative ways of relating that feel fresher and more satisfying in use. The training helps couples transmit their messages more accurately, effectively and efficiently, leading to an exchange of positive, pleasant behaviors and a reduction in negative behaviors. The increase in positive behavioral exchanges also satisfies couples' emotional needs, which ultimately produces positive feelings toward each other and happiness (22, 42, 43).

In general, it can be said that the purpose of the couples' intimacy training program is to increase self-awareness and to develop the ability to build and maintain a happy and enjoyable relationship between husband and wife. The program also teaches skills that lead participants deep into self-exploration. The results of this study clearly illustrate that cognitive therapy sessions focused on intimacy are useful and applicable for raising happiness and improving couples' quality of life, and that they are efficient and effective in improving couples' physical, psychological and environmental health as well as their social relationships. Because this approach is simple and practical for education and treatment, and relatively compatible with our culture, psychologists can use it to reduce problems and increase marital satisfaction in pre- and post-marital education (in order to prevent problems and to enrich and strengthen the family institution). It is therefore suggested that counselors and social workers, focusing on the strength of the family foundation, hold intimacy training classes for couples.
Table 1: Summary of intimacy training sessions based on the cognitive-behavioral approach

Session 1 — Communicating and making readiness: communicating and becoming familiar with couples' communication status, including cognitive, communication, behavioral and conflict-resolution factors; conceptualization; familiarization of couples.
Session 2 — Cognitive factors: identifying couples' beliefs and expectations about intimacy and marital relationships; showing the effect of beliefs on feelings and behaviors. Assignment: completing the cognitive table.
Session 3 — Cognitive factors: studying how the spouse's behavior is perceived and explained; correcting cognitive errors and replacing them with rational beliefs; resolving misunderstandings due to misconceptions or differing perceptions; explaining realistic goals and expectations; becoming familiar with mutual expectations and attending to each other's positive characteristics. Assignment: re-completing the cognitive table and the column of substitute thoughts.
Session 4 — Communication skills: determining realistic goals and expectations about self, spouse and intimate relationships; building the skill of transmitting and clearly receiving thoughts, feelings and needs; evaluating the problems of the sender and receiver of a message; training sender and receiver skills. Assignment: practicing what the spouse likes.
Session 5 — Communication skills: assessing faults in the message and its transmission; building skills for transferring and receiving clear and effective thoughts, emotions and needs; building empathic understanding and listening skills; evaluating couples' patterns and barriers; practicing effective communication skills. Assignment: practicing the sender and receiver skills and instructions.
Session 6 — Communication skills: training in effective communication skills. Assignment: practicing verbal and non-verbal communication.
Session 7 — Behavioral skills: understanding each spouse's patterns of reinforcement and punishment; increasing positive behavioral exchanges and decreasing punishment. Assignment: each couple practices at least two positive behaviors and the reduction of one negative behavior.
Session 8 — Problem-solving skills: problem-solving skills training; applying the problem-solving method to one of the couple's problems; examining existing problems and evaluating the spouses' problem-solving methods; teaching and practicing the steps of the problem-solving method; reducing and studying conflicts between spouses; investigating conflict-resolution patterns and their outcomes; training and practicing conflict-resolution methods.
Session 9 — Conflict-resolution skills: studying the couples' conflicts and conflict resolution. Assignment: compiling conflicts and practicing their resolution.
Session 10 — Evaluation and conclusion: evaluation of couples' change and of the improvement in intimate relationships.

Table 2: Descriptive statistics of the happiness components by group and type of test
Table 3: Levene's test results for evaluating the error variances (happiness)
Table 4: Univariate covariance analysis results for evaluating patterns of difference in the happiness components
Table 5: Descriptive statistics of the quality of life components by group and type of test
Table 6: Levene's test results for evaluating the error variances (quality of life)
Table 7: Univariate covariance analysis results for evaluating patterns of difference in the quality of life components
Impact of the FilmArray Meningitis/Encephalitis panel in adults with meningitis and encephalitis in Colombia

The Biofire® FilmArray Meningitis/Encephalitis (FAME) panel can rapidly diagnose common aetiologies, but its impact in Colombia is unknown. We conducted a retrospective study of adults with CNS infections in one tertiary hospital in Colombia. The cohort was divided into two time periods: before and after the implementation of the Biofire® FAME panel in May 2016. A total of 98 patients were enrolled: 52 in the Standard of Care (SOC) group and 46 in the FAME group. The most common comorbidity was human immunodeficiency virus (HIV) infection (47.4%). The median time to a change in therapy was significantly shorter in the FAME group than in the SOC group (3 vs. 137.3 h, P < 0.001). This difference was driven by the time to appropriate therapy (2.1 vs. 195 h, P < 0.001), through the identification of viral aetiologies. Overall outcomes and length of stay did not differ between the groups (P > 0.2). The FAME panel detected six aetiologies that had negative cultures but missed one patient with Cryptococcus neoformans. The introduction of the Biofire® FAME panel in Colombia has facilitated the identification of viral pathogens and has significantly reduced the time to adjustment of empirical antimicrobial therapy.

Introduction

Meningitis and encephalitis continue to be associated with significant neurological morbidity and mortality [1], so establishing a cause and administering prompt antimicrobial therapy is crucial to improving clinical outcomes for several aetiologies [2, 3]. Unfortunately, rapidly establishing an aetiological diagnosis in public hospitals in Colombia is not easy, given that most of these facilities lack in-house diagnostic tools such as pathogen-specific polymerase chain reaction (PCR) assays for viruses and bacteria or latex agglutination for the detection of the capsular antigen of Cryptococcus spp. Molecular diagnostic tests require shipping the sample to a reference laboratory in either Medellin or Bogota, with a delay of approximately 10-14 days before results are received. The possibility of determining the causal agent has therefore been subject to the yield of traditional cultures, the interpretation of cerebrospinal fluid (CSF) cytochemistry and the clinical findings prompting empirical therapy in the majority of patients [4]. In patients with bacterial meningitis, the sensitivity of the CSF Gram stain ranges from 10% to 93% and that of CSF cultures between 50% and 85%, depending on the pathogen, the country of the study and the receipt of previous antibiotic therapy [5]. This diagnostic uncertainty, together with the fear of an adverse outcome, leads to indiscriminate use of antimicrobials in the majority of patients: clinicians prefer to initiate and maintain broad-spectrum regimens that include antibiotics, antivirals and, for HIV-positive patients, antifungals, which increases not only treatment costs but also the risk of adverse events such as nephrotoxicity and Clostridium difficile diarrhoea [6]. The Biofire® FAME panel is a multiplex PCR tool that uses a 200 μl sample of CSF to identify, within 1 h, the presence of 14 pathogens (Escherichia coli K1, Haemophilus influenzae, Listeria monocytogenes, Neisseria meningitidis, Streptococcus agalactiae, S.
pneumoniae, cytomegalovirus (CMV), enterovirus (EV), herpes simplex virus 1 (HSV-1), herpes simplex virus 2 (HSV-2), human herpesvirus 6 (HHV-6), human parechovirus (HPeV), varicella zoster virus (VZV), and Cryptococcus neoformans/C. gattii); it was approved by the US Food and Drug Administration (FDA) in 2015. Diagnostic correlation studies using banks of CSF samples positive for the targets identified by the test show agreement of >90% [7, 8]. The main limitation of the panel is with C. neoformans, where agreement can be as low as 50% [8]. So far, studies have been carried out mainly in paediatric populations and in immunocompetent patients, with good overall performance, although only approximately 25% of FilmArray panels are positive [9, 10]. Currently, there are no studies evaluating the performance of this panel in Latin America. The Hospital San Jorge de Pereira is a tertiary care public hospital in the central-western region of Colombia, where the prevalence of HIV disease is among the highest in the country. One of the most challenging infections is meningitis/encephalitis, as the diagnosis remains unknown in the majority of patients. In 2016, the Biofire® FAME panel was introduced. The aim of the present study was to evaluate the clinical impact of the introduction of this test in adults with meningitis and encephalitis.

Materials and methods

The study comprised two cohorts of adult patients with meningitis or encephalitis: before (Standard of Care (SOC) group) and after the introduction of the Biofire® FAME panel (FAME group). The inclusion criteria were adults (age >17 years) with a diagnostic suspicion of neuroinfection who had a CSF analysis and complete medical records. Only the first episode of infection was taken into account. Data were collected in an electronic format that included demographic variables (age, sex), clinical variables (comorbidities), paraclinical variables (CSF cytochemistry), culture results, FAME panel results, days of antimicrobial treatment, the time in hours to adjustment of antimicrobial therapy based on microbiological test results (the timeframe between the lumbar puncture and the escalation, de-escalation or discontinuation of antimicrobial therapy), total days of hospitalisation, days of hospitalisation in the ICU, and discharge status (alive or dead). The cost of the diagnostic studies was calculated for both groups according to the protocol for the study of patients with clinical suspicion of central nervous system infection. The SOC work-up for meningitis included blood cultures and computed tomography scans, with magnetic resonance imaging when encephalitis was suspected; the CSF studies included culture, Gram stain, India ink stain and fungal culture and, for patients with an HIV diagnosis, CMV PCR and cryptococcal antigen testing. The FAME work-up included blood cultures, computed tomography scans and magnetic resonance imaging when encephalitis was suspected; the CSF studies included culture, Gram stain, India ink stain and the Biofire® FAME panel. HIV and non-HIV patients were studied in the same way. According to the hospital protocol, patients with clinical suspicion of neuroinfection were assessed by neurology and infectious diseases.
Antimicrobial treatment was initiated according to guidelines (ceftriaxone and vancomycin for bacterial meningitis; amphotericin B, antituberculous therapy, ceftriaxone and ampicillin for patients with HIV disease; and acyclovir for suspected encephalitis) and CSF was obtained for diagnostic testing. The FAME panel was requested by either the neurologist or the infectious diseases specialist; the latter interpreted the results and adjusted the medical management as soon as the panel was completed. Appropriate antimicrobial therapy was defined as therapy initiated for a diagnosis of meningitis/encephalitis established from clinical signs and symptoms and laboratory findings compatible with infection, with adjustment according to Gram stain, culture and PCR results when available. The antimicrobial stewardship programme, headed by an infectious diseases specialist, determined whether therapy was appropriate. The study was approved by the Institutional Review Board of the Hospital San Jorge de Pereira.

Statistical analysis

Results were described as frequencies and as medians with interquartile ranges for the FAME and SOC groups. Statistical differences were assessed by the χ2 and Fisher's exact tests for categorical variables and by the Mann-Whitney U test for comparing medians. The main outcomes were time to appropriate therapy, length of stay and in-hospital mortality. The Glasgow Outcome Scale (GOS), use of diagnostic methods (MRI, head CT scan, blood culture bottles) and use of antimicrobials were secondary outcomes.

Results

Of 118 patients reviewed, 20 were excluded because a neuroinfection was ruled out. A total of 98 patients met the inclusion criteria; of these, 52 (52.5%) were in the SOC group, before the implementation of the panel, and 46 (47.5%) were in the FAME group. The demographic characteristics and outcomes of the two groups are shown in Table 1. There were no differences in sex, clinical characteristics, comorbidities or outcomes (P > 0.05), but patients in the FAME group were older (P = 0.04). As shown in Table 2, there were no differences in the CSF profile between the SOC and FAME groups (P > 0.05). A positive CSF India ink and Gram stain were seen in 5% and 12% of patients, respectively, with no differences between the two groups (P > 0.2). A positive CSF culture was seen more frequently in the SOC group (21% vs. 6.5%, P = 0.039). As shown in Table 2, an aetiology was identified in 14 (27%) patients in the SOC group and 17 (36.9%) in the FAME group. Viral aetiologies were identified only in the FAME group (three with cytomegalovirus and three with herpes simplex viruses); all other aetiologies in both groups were either bacterial or fungal. As shown in Table 3, only two of four positive FAME panels for bacteria were also detected by culture. In regard to C. neoformans, four positive FAME panels had negative CSF cultures, one patient had both a positive FAME panel and a positive culture, and one patient had a negative FAME panel but a positive CSF culture for C. neoformans. Overall outcomes and length of stay in the hospital or in the intensive care unit did not differ between the groups (P > 0.2) (see Table 4). The median time to a change in therapy was significantly shorter in the FAME group than in the SOC group (3 vs. 137.3 h, P < 0.001). This difference was driven by the time to appropriate therapy (2.1 vs. 195 h, P < 0.001), mostly through the timely identification of viral aetiologies such as CMV and HSV (Table 5). The prolonged delay in initiation in the SOC group was seen mainly in non-HIV patients with cryptococcal meningitis and in some patients with viral encephalitis not suspected from the clinical presentation or CSF characteristics. Furthermore, there was a trend towards more timely discontinuation of inadequate antimicrobial therapy in the FAME group (19.1 vs. 92 h, P = 0.05). Patients in the SOC group also had more blood cultures performed than those in the FAME group (P = 0.045). Patients in the FAME group had more magnetic resonance imaging of the brain, a finding associated with the acquisition of this resource by the institution during that period. As shown in Table 6, there were no differences in the use and duration of the different antimicrobial therapies between the groups (P > 0.05).

Discussion

The present study was conducted to evaluate the clinical impact of the implementation of the FAME panel as part of the protocol for the study and management of adult patients with meningitis or encephalitis in a public hospital in Colombia. The proportion of positive FAME panel results in this study (34.7%) was higher than in previous studies (8.7% [7]; 10.4% [11]; 12.7% [9]). We had no false-positive FAME results: all positive results had either a consistent clinical and CSF picture or an isolate in culture (for fungi and bacteria). No additional PCR tests were performed to confirm the viral aetiologies. Unlike other studies conducted in the USA, where the main aetiologies were viruses, the main aetiological agent in our study, both before and after the introduction of the panel, was C. neoformans. This finding reflects the high prevalence of HIV disease in the population studied (∼50%). Viral aetiologies such as CMV and HSV were identified only in the FAME group. Furthermore, the FAME panel detected bacteria and C. neoformans in patients with negative CSF and blood cultures, which in some patients may have been due to previous antibiotic therapy. We also observed a false-negative result in a patient with culture-confirmed cryptococcal meningitis. The occurrence of false negatives for cryptococcal meningitis with the ME panel is consistent with recent literature [8, 12] but differs from studies conducted in Uganda, where a high concordance with culture and latex antigen testing was seen [13]. Given the severity of the outcomes that may result from stopping antifungal therapy because of a false-negative FAME panel, it is recommended, when clinical suspicion of cryptococcosis is high, not to discontinue antifungal therapy until a CSF cryptococcal latex antigen result is obtained or the CSF culture is finalised [8]. The panel does not identify Epstein-Barr virus (EBV), which has been linked to neuronal inflammation, endothelial damage (even in the absence of clinical encephalitis) and central nervous system lymphoma in immunosuppressed patients, with an impact on the development of cognitive dysfunction and psychiatric symptoms [14]. EBV is also associated with post-transplant lymphoproliferative disorder and occasionally with encephalitis in solid organ transplant recipients [15]. In the SOC period, the aetiological diagnosis depended on the results of CSF culture and blood cultures, as well as PCR for viruses in CSF.
The usual time to obtain culture results ranged between 3 and 5 days, and PCR took between 14 and 21 days to be processed in reference laboratories outside Pereira. During the FAME period, there was a significant improvement in the time to obtain results, which was associated with changes in empirical antimicrobial therapy, both the initiation of appropriate therapy and the discontinuation of inappropriate therapy. (Glasgow Outcome Scale: a score of 1 indicates death; 2, a vegetative state (inability to interact with the environment); 3, severe disability (unable to live independently but follows commands); 4, moderate disability (unable to return to work or school but able to live independently); and 5, mild or no disability (able to return to work or school).) The difference in costs after including the Biofire® FAME panel was not significant compared with SOC, although there was a tendency towards a reduction in the cost of antimicrobial therapy and of diagnostic studies with the implementation of the panel. To our knowledge, this is the first study documenting the impact of the FAME panel in Colombia. The FAME period was also associated with less repetition of blood cultures. A recent study evaluating the cost-effectiveness of introducing the ME panel found that the main economic benefit was a decrease in the consumption of antimicrobials [16]. This finding contrasts with another study [10] in which inappropriate therapy was not suspended in a timely manner despite the panel results, which reinforces the importance of implementing the test within an antimicrobial stewardship programme. Given the cost of the test, it is currently unclear whether all CSF samples with suspected neuroinfection, or only those with CSF pleocytosis, should undergo panel testing. A paediatric study showed that EV may present without pleocytosis in infants, suggesting that a universal approach with the ME panel may be cost-effective by reducing the number of studies and days of hospitalisation [17]. Limitations of the study include the limited number of patients who underwent the test, its completion in a single centre, and the lack of confirmation of positive viral results with another test. Furthermore, the sample size did not allow a differential analysis of results by type of aetiology, which might show differences from the global analysis and was possibly associated with low power to detect an impact on clinical outcomes. As for strengths, this is the first study to evaluate the clinical impact of implementing the Biofire® FAME panel in Colombia as part of the work-up of adult patients with clinical suspicion of neuroinfection. Given the results presented, the test is considered useful in the study of patients with neuroinfection, mainly in reducing antimicrobial consumption when used within an antibiotic stewardship programme and directed according to alterations in the CSF cytochemistry. In conclusion, the implementation of the FAME panel in a public hospital in Colombia resulted in more rapid diagnosis, improved the detection of pathogens and had an impact on appropriately modifying the empirical antimicrobial management of patients, but had no impact on length of stay or outcomes. Future studies should validate these results in other Latin American countries.

Financial support: a grant from the A Star Foundation.
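To make the group comparisons described in the Statistical analysis section concrete, here is a minimal sketch in Python with SciPy. The numbers are invented placeholders for illustration only, not the study's patient-level data.

```python
# Sketch of the tests named above: chi-squared / Fisher's exact for
# categorical variables, Mann-Whitney U for medians. Data are invented.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

# 2x2 table: positive CSF culture (yes/no) by group (SOC, FAME) -- made up
table = np.array([[11, 41],   # SOC: positive, negative
                  [3, 43]])   # FAME: positive, negative
chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred for sparse cells
print(f"chi2 p = {p_chi2:.3f}, Fisher p = {p_fisher:.3f}")

# Hours to appropriate therapy in each group -- made-up skewed samples
soc_hours = np.array([150., 210., 96., 300., 187., 240.])
fame_hours = np.array([2., 3., 1.5, 4., 2.5, 6.])
u_stat, p_mw = mannwhitneyu(soc_hours, fame_hours, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.4f}")
```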
Impact of large-x resummation on parton distribution functions

We investigate the effect of large-x resummation on parton distributions by performing a fit of Deep Inelastic Scattering data from the NuTeV, BCDMS and NMC collaborations, using NLO and NLL soft-resummed coefficient functions. Our results show that soft resummation has a visible impact on quark densities at large x. Resummed parton fits would therefore be needed whenever high precision is required for cross sections evaluated near partonic threshold.

A precise knowledge of parton distribution functions (PDFs) at large x is important to achieve the accuracy goals of the LHC and other high-energy accelerators. We present a simple fit of Deep Inelastic Scattering (DIS) structure function data and extract NLO and NLL-resummed quark densities, in order to establish qualitatively the effects of soft-gluon resummation. Structure functions F_i(x, Q^2) are given by the convolution of coefficient functions and PDFs. Finite-order coefficient functions contain logarithmic terms that are singular at x = 1 and originate from soft or collinear gluon radiation. These contributions need to be resummed to extend the validity of the perturbative prediction. Large-x resummation for the DIS coefficient function was performed in [1, 2] in the massless approximation, and in [3, 4] with the inclusion of quark-mass effects, relevant at small Q^2. Soft resummation is naturally performed in moment space, where large-x terms correspond, at O(α_s), to single (α_s ln N) and double (α_s ln^2 N) logarithms of the Mellin variable N. In the following, we shall consider values of Q^2 sufficiently large to neglect quark-mass effects. Furthermore, we shall implement soft resummation in the next-to-leading logarithmic (NLL) approximation, which corresponds to keeping terms of O(α_s^n ln^(n+1) N) (LL) and O(α_s^n ln^n N) (NLL) in the Sudakov exponent. To gauge the impact of the resummation on the DIS cross section, we evaluate the charged-current (CC) structure function F_2 by convoluting NLO and NLL-resummed MS-bar coefficient functions with the NLO PDF set CTEQ6M [5]. We consider Q^2 = 31.62 GeV^2, since it is one of the values of Q^2 at which the NuTeV collaboration collected data [6]. In Fig. 1 we plot F_2(x) with and without resummation (Fig. 1a), as well as the normalized difference Δ = (F_2^res − F_2^NLO)/F_2^NLO (Fig. 1b). We then compare with the NuTeV data at large x. The results of the comparison are shown in Fig. 2: although the resummation moves the prediction towards the data, we are still unable to reproduce the large-x data. Several effects are involved in the mismatch: at very large values of x, power corrections will certainly play a role. Moreover, we have so far used a parton set (CTEQ6M) extracted from a global fit that did not account for the NuTeV data; rather, data from the CCFR experiment [7], which disagree at large x with NuTeV [6], were used. The discrepancy has recently been described as understood [8]; however, it is not possible to draw any firm conclusion from our comparison. We therefore reconsider the CC data in the context of an independent fit. We shall use NuTeV data on F_2(x) and xF_3(x) at Q^2 = 31.62 GeV^2 and 12.59 GeV^2, and extract NLO and NLL-resummed quark distributions from the fit. F_2 contains a gluon-initiated contribution F_2^g, which is not soft-enhanced and is very small at large x: we can therefore safely take F_2^g from a global fit, e.g. CTEQ6M, and limit our fit to the quark-initiated term F_2^q.
We choose a simple parametrization for F_2^q; the best-fit parameters and the χ^2 per degree of freedom are quoted in [9]. In Fig. 3, we present the NuTeV data on F_2(x) and xF_3(x) at Q^2 = 12.59 GeV^2, along with the best-fit curves. Similar plots at Q^2 = 31.62 GeV^2 are shown in Ref. [9]. In order to extract individual quark distributions, we also need to consider neutral-current data. We use BCDMS [10] and NMC [11] results, and employ the parametrization of the non-singlet structure function F_2^ns = F_2^p − F_2^D provided by Ref. [12]. The parametrization [12] is based on neural networks trained on Monte Carlo copies of the data set, which include all information on errors and correlations: this gives an unbiased representation of the probability distribution in the space of structure functions. Writing F_2, xF_3 and F_2^ns in terms of their parton content, we can extract NLO and NLL-resummed quark distributions, according to whether we use NLO or NLL coefficient functions. We assume isospin symmetry of the sea, i.e. s = sbar and ubar = dbar, we neglect the charm density, and we impose the relation s = κ ubar. We obtain a system of three equations, presented explicitly in [9], that can be solved in terms of u, d and s. We begin by working in N-space, where the resummation has a simpler form and quark distributions are just the ratio of the appropriate structure function and coefficient function. We then revert to x-space using a simple parametrization q(x) = D x^γ (1 − x)^δ. Figures 4 and 5 show the effect of the resummation on the up-quark distribution at Q^2 = 12.59 and 31.62 GeV^2, in N- and x-space respectively. The best-fit values of D, γ and δ, along with the χ^2/dof, can be found in [9]. The impact of the resummation is noticeable at large N and x: there, soft resummation enhances the coefficient function and its moments, and hence suppresses the quark densities extracted from structure function data. In principle, the d and s densities are also affected by the resummation; the errors on their moments, however, are too large for the effect to be statistically significant. In [9] it was also shown that the results for the up quark at 12.59 and 31.62 GeV^2 are consistent with NLO perturbative evolution. (Figure 4: NLO and resummed up-quark distribution at Q^2 = 12.59 GeV^2 in moment (a) and x (b) space. Following [9], in x-space we plot the edges of a band corresponding to a prediction at one-standard-deviation confidence level, statistical errors only.) In summary, we have presented a comparison of NLO and NLL-resummed quark densities extracted from large-x DIS data. We found a suppression of valence quarks in the 10-20% range at x > 0.5, for moderate Q^2. We believe that it would be interesting and fruitful to extend this analysis and include large-x resummation in the toolbox of global fits. Our results show that this would in fact be necessary to achieve precision better than 10% in processes involving large-x partons.
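For reference, the logarithmic counting quoted above corresponds to the standard exponentiated structure of the soft-resummed coefficient function in moment space. This is the generic textbook form, not an equation reproduced from this paper:

$$
C(N,\alpha_s) = g_0(\alpha_s)\,
\exp\!\Big[\,\ln N \; g^{(1)}(\lambda) \;+\; g^{(2)}(\lambda) \;+\; \dots\Big],
\qquad \lambda \equiv b_0\,\alpha_s \ln N,
$$

where the function g^(1) resums the leading logarithms α_s^n ln^(n+1) N of the Sudakov exponent and g^(2) the next-to-leading logarithms α_s^n ln^n N, matching the LL/NLL counting stated in the text.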
From a glimpse into the key aspects of calibration and correlation to their practical considerations in chemical analysis

In this tutorial review, we provide a guiding reference on good practice in building calibration and correlation experiments, and we explain how the results should be evaluated and interpreted. The review centers on calibration experiments where the relationship between response and concentration is expected to be linear, although certain of the described principles of good practice can be applied to non-linear systems as well. Furthermore, it gives prominence to the meaning and correct interpretation of some of the statistical terms commonly associated with calibration and regression. To reach a mutual understanding in this significant field, we present, through a practical example, a step-by-step procedure dealing with typical challenges related to linearity and outlier assessment, calculation of the associated error of the predicted concentration, and limits of detection. The use of regression lines to compare analytical methods is also elaborated. The regression and correlation results are obtained with the Microsoft Excel spreadsheet, perhaps one of the most widely used user-friendly software tools in education and research.

Introduction

Regression and correlation are the most used techniques for investigating the relationship between two quantitative variables [1, 2]. There is, though, a key difference between them: regression describes how one variable affects another, whereas correlation measures the degree of a relationship between two variables (x and y). A good example of regression analysis is the construction of an analytical calibration curve, where the instrument response (the dependent variable) depends upon the concentration of the analyte (the independent variable). In other words, if someone aims to analyze how an independent variable is numerically associated with a dependent variable, then the use of regression is mandatory [3]. The ultimate goal of calibration is the prediction of the concentration of an analyte from a single instrumental response, after the above relationship has been established between the values of known samples (i.e., standards with known amounts of analyte present) and the instrument responses. Calibration curves (or graphs or plots) are the bread and butter of analytical chemistry, and their examination is an important step in any method validation or application.
Although the common name of the resulting plot is "calibration curve," analytical chemistry researchers typically attempt to fit a linear function. Calibration typically involves the proper preparation of a set of standards containing known amounts of the analyte of interest, measurement of the instrument response for each standard, and establishment of the relationship between them. Based on a certain number of measurements of standards, two fitting techniques for the linear regression model can be used to set up the calibration curve and estimate the slope and intercept of a linear calibration model: ordinary least squares (OLS) and weighted least squares (WLS) [4, 5]. Testing the reported linearity of a calibration curve should be an everyday task in routine analytical operations. While a great deal of effort is put into the selection and calibration of an analytical instrument, the choices behind curve calculation are usually overlooked. However, neglecting the statistics of calibration can lead to unfortunate conclusions, as no advanced instrumentation or additional measurements can rescue sound data from an erroneously built calibration curve. A calibration curve should either confirm the analyte-response relationship or raise an alert to the presence of a problem, which should be properly addressed. Critical to the success of calibration and correlation is an understanding of the limitations and statistics used to set up a curve. The aim of this tutorial review is to provide a good-practice guide to building calibration and correlation experiments and to explain how the results should be evaluated and interpreted. It centers on calibration experiments where the relationship between response and concentration is expected to be linear, although many of the principles of good practice described can be applied to non-linear systems as well. First, we provide a general-case argument for the minimum number of standards required by regulatory guidance. Then, we examine the poor success rate of simple outlier detection in calibration curves using equidistant (linear) and logarithmic standard spacing, and we look into the significant risks associated with extrapolating the curve (where appropriate) beyond the linear response region. To enhance the understanding of this significant field, we present, through a practical example, a step-by-step procedure dealing with typical challenges related to regression, outlier assessment, procedures for linearity testing, calculation of the associated error of the predicted concentration and the limits of detection. The use of the concepts of correlation and agreement to compare analytical methods is also elaborated. The regression results are obtained using the Microsoft Excel spreadsheet, perhaps one of the most widely used user-friendly software tools in educational settings.
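Although the review carries out its calculations in Excel, the same quantities can be computed in a few lines of Python. The sketch below uses the standard textbook formulas for the standard error of a predicted concentration and a residual-based detection limit (as in Miller and Miller's statistics texts); the calibration data are invented for illustration.

```python
# Sketch: unweighted (OLS) calibration with the standard error of a
# predicted concentration and a 3.3*s/slope-style detection limit.
# Calibration data are invented for illustration.
import numpy as np

x = np.array([0., 2., 4., 6., 8., 10.])             # standard conc., ug/L
y = np.array([0.02, 0.41, 0.80, 1.22, 1.58, 2.02])  # instrument response

n = x.size
b, a = np.polyfit(x, y, 1)                      # slope b, intercept a
resid = y - (a + b * x)
s_yx = np.sqrt(np.sum(resid**2) / (n - 2))      # residual std deviation

# Predict the concentration of an unknown from m replicate responses
m = 3
y0 = np.array([0.95, 0.97, 0.93]).mean()
x0 = (y0 - a) / b
Sxx = np.sum((x - x.mean())**2)
s_x0 = (s_yx / b) * np.sqrt(1/m + 1/n + (y0 - y.mean())**2 / (b**2 * Sxx))

# Detection limit from the residual std deviation and the slope
lod = 3.3 * s_yx / b

print(f"y = {b:.4f} x + {a:.4f}; s(y/x) = {s_yx:.4f}")
print(f"x0 = {x0:.2f} +/- {s_x0:.2f} ug/L (1 s.d.); LOD ~ {lod:.2f} ug/L")
```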
Handling calibration data sets

One of the first questions analysts often ask is: "How many calibration standards do we need to measure, and how many replicates at each calibration level?" Before answering this question, the purpose of the calibration experiment must be defined. It is necessary to distinguish between the calibration of a measurement system and the check of the validity of that calibration. To minimize the risk of error associated with improper calibration of a measurement system, international guidance dictates a minimum number of calibrators and the threshold at which a measurement becomes an outlier. Regulatory guidance provides the minimum number of standards required to establish the calibration curve. For an assessment of the calibration function, as part of a method validation for example, standards at a minimum of seven different concentrations should be included. The EURACHEM guide "The Fitness for Purpose of Analytical Methods" and the draft guidance from the US FDA mandate a minimum of seven calibration standards — six plus a zero-concentration standard — to perform the calibration [6, 7]. Other documents lay down different numbers of calibration levels: for example, ISO 15302:2007 specifies four calibration levels, Commission Decision 2002/657/EC stipulates at least five concentration levels (including the blank) for the construction of a calibration curve [8] as a minimum requirement for an assessment of the calibration function, and ISO 8466-1:1990 demands ten calibration levels [9]. However, these requirements do not explain why the curve should be drawn with this number of points and not with more or fewer. The sample with zero analyte concentration should definitely be included, as it provides better insight into the region of low analyte concentrations and the detection capabilities.

The design of calibration experiments and the number of calibration levels depend very much not only on the purpose of the experiment but also on existing knowledge. Less knowledge about the shape of the calibration function requires initial assessment measurements at more concentration levels. Ideally, the calibration range should be drawn so that the concentrations of the analyte in the test samples fall in the center of the range, where the uncertainty associated with the predicted concentrations is minimized. It is also useful to make at least triplicate independent measurements at each concentration level, particularly at the method validation stage, as this allows the precision of the calibration process to be evaluated at each concentration level. Analyte calibration solutions should be prepared from a pure substance with a known purity value or from a solution of a substance with a known concentration.
The standard concentrations should not only cover the range of concentrations encountered during the analysis of test samples but should also be evenly spaced across that range (for wide calibration ranges, partial arithmetic series could be considered). However, a risk of leverage arises when error is introduced into the measurements used for the calculated curve, even in the absence of an outlier (vide infra). Leverage can be a concern if one or two of the calibration points lie far from the others along the x-axis (near the ends of a calibration curve), where any error has a disproportionate effect on the curve. Even if these points are not outliers, they may exert a degree of leverage: a relatively small error in the measured response will have a significant effect on the position of the regression line. This situation often arises when calibration standards are prepared by sequential dilution of solutions. A frequently employed procedure is to prepare the most concentrated standard and then dilute it by, say, 50% to obtain the next standard, which is subsequently diluted by 50%, and so on (e.g., 64 μg L−1, 32 μg L−1, 16 μg L−1, 8 μg L−1, 4 μg L−1, 2 μg L−1). This procedure is not recommended because, in addition to the lack of independence, the standard concentrations will not be evenly spaced across the concentration range, leading to leverage (e.g., for the concentration of 64 μg L−1 in Fig. 1). As a result, the calculated slope and intercept might be disproportionately affected by that data point.

Linearity in calibration and its misinterpretation

Linearity is an important feature of any analytical method. If the calibration function is linear, the estimation of the equation is easier and the errors in estimating the concentrations of unknown samples from the calibration equation are likely to be smaller. The correlation coefficient (r) of a calibration graph, or the R-squared (R2, coefficient of determination), is usually employed as an indicator of linearity, by inspecting its closeness to 1. Strictly speaking, r is a measure of the relationship between two variables x and y. Its use in calibration, though, is based on a widespread misunderstanding: if the calibration points are clustered around a straight line (and this is not the only such case), the experimental value of r will be close to unity, but the opposite is not true. The International Union of Pure and Applied Chemistry (IUPAC) discourages the use of r to assume linearity in the relationship between concentration and analytical response, as expressed by this excerpt from Ref. [4]: "... the correlation coefficient, which is a measure of the relationship of two random variables, has no meaning in calibration ...". Furthermore, when a new analytical method is developed, the guide for authors of the Journal of Chromatography A explicitly states that "claims of linearity should be supported by regression data that include slope, intercept, standard deviations of the slope and intercept, standard error and the number of data points; correlation coefficients are optional." That is, the criterion of r as a measure of the degree of linear association between concentration and signal is weak.
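The weakness of r as a linearity criterion is easy to demonstrate numerically: a mildly curved response can still yield r very close to 1. The short sketch below, with invented noise-free data, makes the point.

```python
# Sketch: a clearly curved "calibration" still gives r ~ 1,
# illustrating why r alone cannot establish linearity. Data invented.
import numpy as np

x = np.linspace(1, 10, 10)
y = 0.5 * x - 0.02 * x**2           # slight downward curvature, no noise

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.4f}")               # close to 1 despite obvious curvature

# The residuals from a straight-line fit reveal the curvature at once:
b, a = np.polyfit(x, y, 1)
print(np.round(y - (a + b * x), 3)) # systematic arch-shaped pattern
```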
In view of the above, the perception of linearity based on the criterion of the correlation coefficient has, in recent years, been superseded by proper statistical tests. The assessment of linearity can be carried out by resorting to the analysis of variance (ANOVA) of the calibration data [10]. In this methodology, a comparison of the so-called lack-of-fit (LOF) variance with the pure-error variance is made through an F-test (see the "Practical example" and "Procedures for linearity assessment" section); however, it is essential to consider the error components of the regression. The prediction error for each data point can be measured as the difference between the observed response (y_ij) and the predicted response (ŷ_ij), i.e., y_ij − ŷ_ij. To assess the overall prediction error, we calculate this difference for every data point, square it, and then sum all these squared differences, resulting in a term known as the "sum of squares of the residuals (SS_RES)." The degrees of freedom (d.f.) associated with SS_RES is [p (concentration levels) × n (replicates)] − 2, since we estimate the slope and the intercept (i.e., two parameters).

This residual error, SS_RES, can be decomposed into two constituent components: the "sum of squares for lack of fit (SS_LOF)" and the "sum of squares for pure error (SS_PE)," i.e., SS_RES = SS_LOF + SS_PE (Fig. 2).

Fig. 2 Error components in regression analysis

When a line effectively fits the data, the average of the observed responses ȳ_i at each x-value closely aligns with the predicted response ŷ_ij for that specific x-value. Consequently, to assess the extent to which the overall error arises from model inadequacy, we gauge the deviation between the average observed response at each x-value and the predicted response for each data point, i.e., ȳ_i − ŷ_ij (Fig. 2). To measure the complete lack of fit of the model, we calculate this distance for every calibration point, square it, and sum all these squared differences to obtain SS_LOF (associated with p − 2 degrees of freedom).

To assess the portion of the overall error attributed solely to random fluctuations, we examine the extent to which each observed response (y_ij) deviates from the average observed response (ȳ_i) at each corresponding concentration (x-value), i.e., y_ij − ȳ_i. Similarly, the total pure error is calculated by summing the squared differences for each calibration point to get SS_PE (p × n − p d.f.). When the respective sums of squares are divided by their associated degrees of freedom, the mean squares (MS) are obtained. It is from those mean squares that we calculate the F-statistic, as follows:

F_LOF = MS_LOF / MS_PE = [SS_LOF / (p − 2)] / [SS_PE / (p × n − p)]

It is advisable that readers, for further reading, peruse the excellent brief reports edited by the Analytical Methods Committee of the Royal Society of Chemistry [11] and the textbook "Calibration and Validation of Analytical Methods" edited by Mark T. Stauffer [12], which seek to introduce readers to current methodologies of analytical calibration.
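A minimal sketch of the lack-of-fit ANOVA just described, on hypothetical replicated calibration data (p = 5 levels, n = 3 replicates); only numpy and scipy are assumed:

```python
import numpy as np
from scipy import stats

# Hypothetical replicated calibration data: p = 5 levels, n = 3 replicates
x = np.repeat([2.0, 4.0, 6.0, 8.0, 10.0], 3)
y = np.array([0.021, 0.023, 0.022, 0.044, 0.041, 0.043,
              0.063, 0.066, 0.064, 0.085, 0.083, 0.086,
              0.104, 0.107, 0.105])

b, a = np.polyfit(x, y, 1)                  # OLS slope and intercept
y_hat = a + b * x
ss_res = ((y - y_hat) ** 2).sum()           # SS_RES

levels = np.unique(x)
p, n = levels.size, 3
# Pure error: scatter of the replicates about their own level mean
ss_pe = sum(((y[x == c] - y[x == c].mean()) ** 2).sum() for c in levels)
ss_lof = ss_res - ss_pe                     # SS_RES = SS_LOF + SS_PE

ms_lof = ss_lof / (p - 2)
ms_pe = ss_pe / (p * n - p)
F = ms_lof / ms_pe
F_crit = stats.f.ppf(0.95, p - 2, p * n - p)
print(f"F_LOF = {F:.2f}, F_crit(0.05, {p - 2}, {p * n - p}) = {F_crit:.2f}")
print("Linear model acceptable" if F <= F_crit else "Significant lack of fit")
```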
Homogeneity and non-homogeneity of variances

Most of the reports in the literature refer to OLS, which, in practice, should only be used when the experimental data have constant variance (homoscedasticity). In contrast, WLS is more appropriate when the variance varies (heteroscedasticity), in other words, when every calibration point does not have an equal impact on the regression. There are several tests for the homogeneity of more than two variances [13]. A simple way of testing homoscedasticity is to plot the residuals calculated from the straight line obtained by the OLS method. A horizontal band of residuals indicates constant variance, and unweighted least-squares regression is recommended. A trumpet-shaped opening toward larger values signifies increasing variability as the concentration increases.

From a practical viewpoint, when a narrow concentration range is considered, the unweighted linear model is usually adopted, while a larger range may require a weighted model. If the weight is estimated incorrectly, the calculated estimators of the regression coefficients (slope and intercept), being sensitive to extreme data points, will be biased, with a concomitant negative impact on the predicted concentration intervals for real samples. It is noted that ignoring the inhomogeneity of variances will not sacrifice much statistical reliability when working in the mid-range of the calibration curve. Nonetheless, WLS can reduce the limit of quantification and enable a broader linear calibration range with higher accuracy and precision, especially for bioanalytical methods. Depending on the characteristics of the data set, the weighting factors can be employed in a number of different ways [14]. The incorporation of heteroscedasticity into the calibration procedure is recognized by ISO and the USFDA; the latter recommends that "the simplest model that adequately describes the concentration-response relationship should be used. Selection of weighting and use of a complex regression equation should be justified" [15]. Raposo, in his tutorial review, provides an illustrative example selected from the literature that best suits the weighting approach in calibration [5]. A more detailed examination of this topic is beyond the scope of this review.

Non-linearity

It may be the case that several analytical methods exhibit a good response over a broad concentration range (i.e., several orders of magnitude). Because of this behavior, it is helpful to construct a double logarithmic plot based on the raw data. Importantly, the resulting plot serves the purpose of proving the method response over this broad concentration range, not of fitting a linear function to the data sets. In other cases, a non-linear dependence of the analytical response on concentration is likely to appear, for instance, in analytical methods based on electrochemistry (Fig. 3). The method that leads to such a calibration data set may have unacceptably low or very low sensitivity (note the low slope in the concentration range above 10 nM in the raw-data calibration plot of Fig. 3A). To cope with this situation, some researchers choose to take the logarithms of the concentration values and the analytical response (Fig. 3B), thus claiming linearity of the method over the extended concentration range. Whether to use raw data or data after logarithmic transformation depends primarily on analytical principles.
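As a brief illustration of weighted fitting, the sketch below contrasts OLS with 1/x²-weighted least squares on hypothetical heteroscedastic data; the 1/x² weighting is just one common empirical choice and, per the USFDA wording quoted above, its selection should be justified from the data (e.g., from replicate variances):

```python
import numpy as np

# Hypothetical heteroscedastic calibration data: noise grows with concentration
x = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
y = np.array([0.012, 0.051, 0.098, 0.510, 0.985, 5.120])

# OLS: every point weighted equally
b_ols, a_ols = np.polyfit(x, y, 1)

# WLS with empirical 1/x^2 weights; numpy's polyfit applies the weights to
# the residuals directly, so the square root of the weight is passed
w = 1.0 / x ** 2
b_wls, a_wls = np.polyfit(x, y, 1, w=np.sqrt(w))

print(f"OLS: y = {b_ols:.5f} x + {a_ols:.5f}")
print(f"WLS: y = {b_wls:.5f} x + {a_wls:.5f}")
# The WLS fit is anchored by the precise low-concentration points, which
# improves accuracy near the limit of quantification.
```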
Arguably, associating the analytical response with the logarithm of concentration when handling calibration data sets may not be the best choice, as it can easily lead to misinterpretation of the analytical performance. A legitimate practice in this case is to display the calibration data sets in plots with linear (equidistant) axes fitted with logarithmic or other non-linear functions. Nonetheless, when the non-linear relationship is fitted for a small number of concentration levels, the process becomes unreliable. Any curve that includes a non-linear response should have sufficient additional points between the upper limit of linearity and the upper limit of quantitation to describe the inflection point. Unless a great number of concentration levels are included in the data set, the plateau interval within a fitted non-linear function cannot be used for quantitative analysis.

In 2020, P.L. Urban published a Perspective on the dependence between analytical response and logarithm of concentration, with the cautionary title "Please Avoid Plotting Analytical Response against Logarithm of Concentration" [16]. The author puts forward a well-argued case for proper data treatment, in which the non-logarithmic concentration is displayed on the x-axis of the calibration plot, warning of misleading results when this advice is not followed. Others, in defense of the application of logarithmically transformed data, claim that enzyme-catalyzed reactions or electrochemical data in logarithmic form are more appropriate for function fitting [17].

Outliers' assessment in linear regression

An outlier is an experimental measurement that is significantly different from the rest of the data set. In the case of calibration, an outlier appears as a point that lies well apart from the trend of the other calibration points and introduces leverage or bias into the position of the line. When it lies in the middle of the calibration range, the outlier can shift the regression line up or down (Fig. 4A). The slope of the line will be approximately correct, but the intercept will be wrong; in this case, a bias is introduced. An outlier at the extremes of the calibration range will change the position of the calibration curve by tilting it upwards or downwards (Fig. 4B); the outlier is then said to have a degree of leverage.

After estimating the regression parameters, the plot needs to be examined to identify any data points that deviate significantly from the remaining data set (considering the assumption of linear calibration). For this purpose, the residuals (y_i − ŷ) are first calculated and graphed against their corresponding concentration levels. Two horizontal lines are then drawn at a fixed distance above and below zero (e.g., at ± t(0.05, p − 2) × S_y/x; see below); points that lie outside these lines are considered qualitative outliers. Details are given in the "Practical example" and "Assessment of outliers" section. With respect to identifying outliers when fitting curves with non-linear regression, a method has been proposed that combines robust regression and outlier removal [18]. Although this is not an easy task, the analysis of simulated data demonstrated that this procedure identified outliers from non-linear curve fits with reasonable power and few false positives.
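The two screens just described (the ± t × S_y/x band and standardized residuals above 3) can be sketched as follows, on hypothetical replicated data containing one bad replicate:

```python
import numpy as np
from scipy import stats

# Hypothetical calibration: 7 levels x 3 replicates; one bad replicate at x = 6
x = np.repeat(np.arange(0.0, 13.0, 2.0), 3)
y = np.array([0.0002, 0.0010, 0.0018, 0.0202, 0.0210, 0.0218,
              0.0402, 0.0410, 0.0418, 0.0602, 0.0730, 0.0618,
              0.0802, 0.0810, 0.0818, 0.1002, 0.1010, 0.1018,
              0.1202, 0.1210, 0.1218])

b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)
dof = x.size - 2
s_yx = np.sqrt((resid ** 2).sum() / dof)        # residual standard deviation

band = stats.t.ppf(0.975, dof) * s_yx           # qualitative +/- t * S_y/x band
standardized = resid / s_yx                     # numerical screen: |.| > 3

for xi, r, s in zip(x, resid, standardized):
    if abs(r) > band or abs(s) > 3:
        print(f"suspect point: x = {xi}, residual = {r:+.4f}, "
              f"standardized residual = {s:+.1f}")
```

Note that the suspect replicate inflates S_y/x itself, so with very few calibration points a single gross outlier can mask itself; replicated designs, as here, make both screens far more sensitive.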
Standard addition(s) method

The use of pure standard solutions to establish the ordinary calibration graph assumes that there is no reduction or enhancement of the signal by other matrix components of real samples. Hence, the so-called direct calibration method of analysis can be applied. However, in many areas of analytical chemistry this assumption is not valid, as matrix effects can occur even with methods that have the reputation of being relatively free from interferences. A solution to this problem is to perform all the analytical measurements, including the construction of the calibration graph, using the sample itself, thus applying the standard addition(s) method. Matrix effects should be ascertained beforehand; this can be done by testing the statistical equality of the slopes of the lines arising from the direct calibration and standard addition methods. If the slopes are demonstrated to be statistically different at a certain confidence level (and the adequate number of degrees of freedom), the use of the standard addition method is undoubtedly the most favorable choice [19].

Note that sound evidence of linearity of the direct calibration method is a requisite for the use of the method of standard addition and the extrapolation involved. Also, standard addition cannot be applied if the intercept of the regression equation estimated from pure standard solutions in the direct calibration method is not zero, or close to zero (the zero-intercept assumption is seldom plausible). Unless the intercept is zero, the method will show a positive error compared with the true concentration of the target in the sample.

Based on the above, once it has been established that the linear direct calibration model fits the data, a statistical test should be carried out for zero intercept (see the "Practical example" and "Procedures for linearity assessment" section). This can be judged by inspecting the confidence interval of the intercept a, i.e., a ± t × s_a (s_a is the standard error of a and t the two-tailed Student's t-value with n − 2 degrees of freedom). If the zero value is included within the confidence interval a ± t × s_a, then the intercept is statistically zero. Provided that a zero intercept has been demonstrated, the standard addition approach adheres to the following procedure: (i) take several portions of the (treated) sample solution (six or seven at a minimum) and add a known amount of the analyte to each of them; (ii) the amounts added are (almost) evenly spaced from zero to the maximum; importantly, the total volume of all the treated solutions should be kept constant; (iii) the responses are measured, and the original concentration is estimated by extrapolation of the line to zero response.

Despite its capabilities, the implementation of standard addition implies certain limitations. The extrapolation causes the technique to perform poorly over narrow (linear) calibration ranges; when carrying out such an experiment, it is therefore recommended to add up to several times the original analyte concentration. Also, extrapolation degrades the precision compared with direct calibration, where interpolation is exploited, and hence the uncertainty is generally increased. More details about its application are beyond the scope of this review.
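A minimal sketch of the extrapolation step, on hypothetical spiked-portion data; the expression used for the precision of the extrapolated value is the standard textbook formula for s_xE (cf. the "Calculation of a concentration, its error, and the limits of detection" section):

```python
import numpy as np
from scipy import stats

# Hypothetical standard-additions experiment: equal sample portions spiked
# with evenly spaced amounts of analyte (ug/L added), constant final volume.
added = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
signal = np.array([0.052, 0.073, 0.094, 0.116, 0.135, 0.157])

b, a = np.polyfit(added, signal, 1)
x_e = a / b          # extrapolated analyte concentration in the treated sample

# Precision of the extrapolated value:
# s_xE = (S_y/x / b) * sqrt(1/n + ybar^2 / (b^2 * Sxx))
resid = signal - (a + b * added)
n = added.size
s_yx = np.sqrt((resid ** 2).sum() / (n - 2))
sxx = ((added - added.mean()) ** 2).sum()
s_xe = (s_yx / b) * np.sqrt(1.0 / n + signal.mean() ** 2 / (b ** 2 * sxx))

t = stats.t.ppf(0.975, n - 2)
print(f"x_E = {x_e:.2f} +/- {t * s_xe:.2f} ug/L (95% confidence limits)")
```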
Standard deviation vs standard error

From a properly constructed calibration plot, the analyst expects that a reliable calculation of the concentration of the analyte in tested samples can be made. No quantitative result is of any value unless it is accompanied by a realistic estimate of its uncertainty, i.e., the range within which the true value of the quantity being measured should lie. At this stage, the residual standard deviation (or error) can be used as an estimate of the uncertainty in predicted concentration values, since the precision of the measurements (as represented by the residual standard deviation) is an important factor in assessing the uncertainty. Additionally, the regression model can be employed for estimating the limit of detection of the analytical procedure (see the "Practical example" and "Calculation of a concentration, its error, and the limits of detection" section). Hence, the random errors in the slope (S_b) and intercept (S_a) values hold significance (see the "Practical example" and "Assessing the scatter plot and performing regression analysis" section).

"Standard error of the mean" (SEM) and "standard deviation" (SD) are employed in different contexts and have different interpretations and calculations; however, they are often confused and misused, and for this reason they deserve discussion [20, 21]. The SD may be a good estimate of the variability of the population from which the data (i.e., the statistical sample) were drawn. For normally distributed data, about 99.7% of the values lie within 3 × SD of the mean, while the remainder are scattered above and below these limits. Evidently, when widely scattered measurements are to be expressed, the standard deviation should be quoted.

As the sample mean varies from statistical sample to sample in a population, the way this variation occurs is expressed by the "sampling distribution of the mean." Here, the SEM is a type of standard deviation which expresses the precision of the sample mean and is calculated by the simple relation:

SEM = SD / √n

The SEM is always smaller than the SD; as the sample size n increases, the standard error decreases. From a practical viewpoint, if the aim is to obtain an insight into the uncertainty around the estimate of the mean value of the measurements (this is almost invariably the case in analytical chemistry), reporting the SEM is the most useful and reliable way of calculating a confidence interval. For a large sample, a 95% confidence interval is obtained as 1.96 × SEM on either side of the mean. To report a 95% confidence interval instead of a 99% one is only a matter of choice, and has become a convention related to calling a p-value lower than 0.05 statistically significant. The analyst/researcher should appreciate that the contrast between these two terms reflects the distinction between descriptive statistics (i.e., SD) and inferential statistics (i.e., SEM): the standard deviation is the degree to which individual values within a statistical sample differ from the sample mean, whereas the SEM gauges how close the sample mean is likely to be to the population mean.

Finally, in many publications/analytical reports, the sign ± is used to join the SD or SEM to an observed mean, for example 13.4 ± 2.3 or 13 ± 2. However, this notation does not indicate whether the second figure is the SD or the SEM. Analysts/researchers are advised to indicate clearly whether the standard deviation or the standard error is being quoted.
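A short sketch making the SD/SEM distinction concrete on hypothetical replicate data:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements of a single sample (mg/L)
x = np.array([13.1, 13.7, 12.9, 13.5, 13.8, 13.4, 13.0, 13.6])
n = x.size

sd = x.std(ddof=1)          # descriptive: spread of the individual values
sem = sd / np.sqrt(n)       # inferential: precision of the sample mean

t = stats.t.ppf(0.975, n - 1)   # exact t-based CI; approaches 1.96 for large n
print(f"mean = {x.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
print(f"95% CI for the mean: {x.mean():.2f} +/- {t * sem:.2f}")
# When reporting, always state explicitly whether the +/- term is SD or SEM.
```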
Correlation vs agreement

As mentioned above, correlation allows researchers to establish the presence or absence of a relationship between two variables. When these variables are correlated, we can measure the strength of their association. Tasks pertinent to correlation in analytical chemistry often involve demonstrating a degree of association between analytical methods. These may be carried out by investigating the relationship between a new method and an official/reference/alternative interference-free method. When researchers seek to report this association, either "correlation" or "agreement" is often used; however, using the term "correlation" as a synonym of "agreement" can be misleading in any field of research [22].

This part aims to clarify the definition of these two terms in method development. The correlation coefficient r does not provide any information on the agreement between two variables. Under certain conditions, the magnitude of r only provides information on how close the points lie to the regression line (see above, the "Linearity in calibration and its misinterpretation" section). Correlation analysis assumes that the distribution of one variable does not depend on the other, which is the case when comparing two different analytical methods. Also, r does not assume normality, but it does assume homoscedasticity, i.e., constant variance. Inspection of the available data can reveal whether the correlation is linear or non-linear. In this context, a scatter plot of the data should always be the first step before interpreting a correlation coefficient, to avoid incorrect assumptions about the relationship between the variables of interest. When calculating r, statistical software reports a p-value, which represents the probability of observing a correlation at least as strong as the measured one if no linear correlation actually existed between the variables (the null hypothesis). A significantly non-zero r means that there is a dependence between x and y. Here, "significant" refers to how far r would be expected to lie from zero by chance, which depends both on the number of measurements, n, and on the distribution of each variable. A low p-value (≤ 0.05) provides evidence that the measured r represents a significant correlation between the two variables. When a correlation exists, linear regression enables the calculation of the equation that minimizes the distance between the fitted line and all the data points in the sample. The R-squared (R²) or coefficient of determination, commonly used in analytical chemistry, is another measure often encountered in linear regression analysis.

The term agreement is distinct; it is used to assess whether measurements by two analysts or two different methods yield similar results. In the case of comparing the concentration of an analyte via two methods, we are measuring the same analyte in the same sample with different methods. The two values may be correlated but do not necessarily agree. In this case, r is not sufficient. Even if the two methods are highly correlated, with an r approaching unity, this does not provide conclusive evidence that the methods agree. A high r value may suggest agreement, but this cannot be confirmed without further analysis to assess potentially biased results. Again, by inspecting the relationship on a scatter plot, it may be easy to see whether the methods demonstrate poor agreement with significant bias.
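The point is easily demonstrated numerically. In the hypothetical sketch below, a second method that is a biased copy of the first yields r ≈ 1, while the Bland-Altman statistics introduced in the next section expose the disagreement:

```python
import numpy as np
from scipy import stats

# Hypothetical method comparison: Method 2 is perfectly correlated with
# Method 1 but carries a proportional bias (slope 1.1) and an offset.
method1 = np.array([1.2, 2.4, 3.1, 4.8, 6.0, 7.5, 9.1, 10.3])
method2 = 1.10 * method1 + 0.5 + np.array([0.02, -0.03, 0.01, -0.02,
                                           0.03, -0.01, 0.02, -0.02])

r, p = stats.pearsonr(method1, method2)
print(f"r = {r:.4f}, p = {p:.2e}")            # r ~ 1: strong correlation...

# ...yet the Bland-Altman statistics expose the systematic disagreement
diff = method2 - method1
m_d = diff.mean()                             # bias (mean difference)
sd_d = diff.std(ddof=1)
loa = (m_d - 1.96 * sd_d, m_d + 1.96 * sd_d)  # limits of agreement
print(f"mean difference = {m_d:.2f}, 95% limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f})")        # clearly displaced from zero
```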
Ordinary least-squares regression could potentially be used to assess agreement; however, this regression assumes that one method is error-free, which is not true in most settings. When agreement between paired measurements is required, a Bland-Altman plot, also referred to as a difference plot, is a straightforward and reliable alternative to a scatter plot [23]. Practically, it is a plot of the difference between two measurements on the y-axis (Method 1 minus Method 2) against the mean of the two measurements on the x-axis ({Method 1 + Method 2}/2). This simple plot reveals any bias between the measurements, i.e., the average difference between Method 1 and Method 2. Limits of agreement (LOA) are plotted as separate lines, demonstrating the range within which 95% of the differences between one method and the other are included. These limits are expressed as: mean observed difference (m_d) ± 1.96 × standard deviation of the observed differences (sd_d).

As alluded to above, correlation is not synonymous with agreement. Correlation refers to the presence of a relationship between two different variables, whereas agreement refers to the concordance between two measurements of one variable. Two sets of measurements which are highly correlated may have poor agreement; however, if the two sets agree, they will surely be highly correlated.

Practical example

To showcase the suitability of the suggested approach, we utilize data from a method that has been internally validated for the determination of Cd in natural water (Table 1).

Assessing the scatter plot and performing regression analysis

The process that allows quick identification of any issues with the calibration data is visual inspection of the calibration plot (Fig. 5). Based on the findings, a typical approach would be to assume that the response is linearly correlated with the concentration. Regression determines the optimal values of the slope (indicated as "b") and intercept (indicated as "α") that best describe the linear relationship between the analyte level (x) and the analytical signal (y). Typically, regression analysis is conducted using specialized software provided with instruments or popular packages like Excel (Table 2). According to the Excel spreadsheet output, the instrumental response is linearly related to the Cd concentration (independent variable x) based on OLS regression, in the form y = b (± S_b) x + α (± S_a), as follows (with rounding to an appropriate number of significant digits):

y = 0.01115 (± 0.00008) x − 0.0017 (± 0.0003)   (1)

Assessment of outliers

The differences between the observed values (y_i) and the predicted values (ŷ) (Table 3) are computed, or can be generated from spreadsheet software programs. These differences are then plotted against their respective concentration levels, as shown in Fig. 6. In this plot, two horizontal dashed lines, which indicate the acceptable range of deviation for each individual data point, are drawn at ± t(0.05, p − 2) × S_y/x. From the residuals of Table 3: sum of squared residuals = 0.0000278516; degrees of freedom, d.f.: p × n − 2 = 7 × 5 − 2 = 33; (S_y/x)² = 0.0000278516/33 = 0.0000008440; S_y/x = 0.00091869; t(0.05, p − 2) = 2.571; ± t(0.05, p − 2) × S_y/x = ± 0.0024 (the dashed lines in Fig. 6).

Another straightforward numerical criterion for identifying potential outliers is to check whether standardized residuals (a residual divided by S_y/x is commonly known as a standardized residual) greater than 3 are found (last column of Table 3). Similarly, there is no indication of an outlier in this hypothetical scenario.
Note: Other, more sophisticated calculation techniques include the estimation of Cook's squared distance for each data point. However, before applying these, it is important to identify (based on the above) the suspect calibration point that could potentially be omitted.

Procedures for linearity assessment

As mentioned before, R² values can be misleading in the context of linearity evaluation. Examining the plot of the residuals generated through linear regression of the responses against the concentrations is one option for assessing linearity. The residual plot (Fig. 6) exhibits a random pattern within the range, without any discernible systematic pattern, indicating that the linearity assumption is likely correct. However, there are instances when this visual inspection may be quite subjective and open to interpretation.

Note: Plotting the ratio of the individual response values to their corresponding concentrations vs the concentration range is another option (Huber's linearity test [5]). The lower and upper limits of tolerance are established by multiplying the median value of the ratio of the individual response values to their corresponding concentrations by constant factors of 0.95 and 1.05, respectively. Here, the calibration range falls within the linear range, as there are no results that exceed the tolerance limits.

To ensure the accuracy of the chosen model (Eq. 1), it is essential to employ reliable statistical tools that can confirm the assumption of linearity. This necessitates the use of robust statistical methods, such as ANalysis Of VAriance (ANOVA). If the regression F-value (found in the Excel output, Table 2) is greater than the critical F(0.05, 1, p × n − 2), then the regression model is considered acceptable. This indicates that the variations in the response can be explained by the model.

The ANOVA-LOF, also known as the LOF (lack-of-fit) test, is the most common and reliable statistical test applied to calibration experiments for the acceptance of model linearity. The ratio F_LOF (MS_LOF / MS_PE) is compared to the critical F(0.05, p − 2, p × n − p). If F_LOF is equal to or less than the critical F, it is possible to accept the hypothesis that the regression model is linear. From Table 3, we obtain the residual error sum of squares (SS_RES); the data treatment for the calculation of the terms SS_PE and MS_PE is given in Table 4.

If there is no significant lack of fit, it is advisable to conduct a final t-test to determine whether the intercept deviates significantly from zero. If the calculated t-value (= a/S_a) is equal to or lower than the critical t (d.f.: p − 2; here at the 99% confidence level, although 95% is more common), the null hypothesis that the intercept is not significantly different from zero is accepted. From Eq. 1, we calculate t = 0.0017/0.0003 = 5.666 > 4.032 = t_crit(0.01, 5). Therefore, the calibration curve should not be forced through zero, and it is properly described by Eq. 1, as given above.

Note: Arithmetically, the decision to force the line through zero or not can be based on the comparison of the y-intercept (α) to its standard error (S_a = 0.000280046, Table 2). If |α| > S_a, then α ≠ 0; otherwise, α = 0.
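The zero-intercept decision for the worked example can be reproduced in a few lines; the numbers are those of Eq. 1 and Table 2:

```python
from scipy import stats

# Values from the worked Cd example (Eq. 1 and Table 2)
a, s_a = 0.0017, 0.0003      # |intercept| and its standard error
p = 7                        # concentration levels

t_calc = a / s_a
t_crit = stats.t.ppf(1 - 0.01 / 2, p - 2)    # two-tailed, 99% confidence
print(f"t = {t_calc:.3f}, t_crit(0.01, {p - 2}) = {t_crit:.3f}")
if t_calc > t_crit:
    print("Intercept differs significantly from zero: do not force through origin")
else:
    print("Intercept statistically zero: forcing through the origin is defensible")
```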
Calculation of a concentration, its error, and the limits of detection

Assuming that homoscedasticity is fulfilled, the regression line computed in the preceding section can be used to estimate the concentrations of test samples through interpolation, together with the associated error. Additionally, it may be employed for estimating the limit of detection of the analytical procedure. Hence, the significance lies in the random errors associated with the values of the slope (S_b) and intercept (S_a).

The unknown sample concentration can readily be determined by substituting the sample signal or response of 0.030 (the average of k = 5 replicates) into the regression equation (Eq. 1). This yields an x value of 2.84024 μg L−1. To calculate the overall error in the corresponding concentration, we employ the following formula [2]:

s_x0 = (S_y/x / b) × √[ 1/k + 1/p + (y₀ − ȳ)² / (b² × Σ(x_i − x̄)²) ]   (2)

where k is the number of replicate measurements used to establish the sample's average signal y₀, p is the number of calibration points, ȳ is the average signal for the calibration standards, x_i and x̄ are the individual and mean concentrations of the calibration standards, and b is the slope calculated from the regression equation (Eq. 1). Confidence limits can be calculated as μ = x₀ ± t × s_x0 (p − 2 d.f.). In our case, x₀ = 2.84024 μg L−1 and s_x0 = 0.04833 μg L−1.

As we have seen, the limit of detection (LOD) can be described as the concentration of the analyte that produces a signal equivalent to the blank signal, y_B, augmented by three times the standard deviation of the blank, s_B: LOD = y_B + 3 × s_B. The calculated intercept (α) can serve as an estimate of the value of y_B. The unweighted least-squares method relies on the assumption that every point on the plot, including the point representing the blank or background, exhibits a variation that follows a normal distribution. This variation occurs only in the y-direction, and its standard deviation is estimated by S_y/x. Thus, it is suitable to substitute S_y/x for s_B when estimating the limit of detection. From the previous calculations, S_y/x = 0.000918689 and y_B ≈ α = 0.0017. Thus, LOD = 0.0017 + (3 × 0.000918689) = 0.00446 μg L−1.

Note: It is feasible to conduct multiple repetitions of the blank experiment to acquire independent values of s_B. These two approaches for estimating the LOD should exhibit negligible differences.

For the standard additions method, the concentration of the analyte in the test sample can be calculated from the ratio of the intercept (α) and the slope (b) of the regression line (since the line is extrapolated to the point on the x-axis at which y = 0). As the concentration of the sample cannot be predicted solely from a single measured value (multiple standard additions are required), the error of the extrapolated x_E value is provided by:

s_xE = (S_y/x / b) × √[ 1/n + ȳ² / (b² × Σ(x_i − x̄)²) ]

The respective confidence limits can be calculated as x_E ± t(p − 2) × s_xE (see the worked example below).
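The interpolation and its uncertainty (Eq. 2) can be scripted as below. Since the Table 1 data are not reproduced here, the calibration values used are hypothetical ones chosen to resemble Eq. 1:

```python
import numpy as np
from scipy import stats

def predict_with_ci(x_cal, y_cal, y0_mean, k, alpha=0.05):
    """Interpolate a concentration from a fitted line and attach the
    uncertainty of Eq. (2): s_x0 = (S_y/x / b) * sqrt(1/k + 1/p + ...)."""
    b, a = np.polyfit(x_cal, y_cal, 1)
    resid = y_cal - (a + b * x_cal)
    p = x_cal.size
    s_yx = np.sqrt((resid ** 2).sum() / (p - 2))
    sxx = ((x_cal - x_cal.mean()) ** 2).sum()
    x0 = (y0_mean - a) / b
    s_x0 = (s_yx / b) * np.sqrt(1.0 / k + 1.0 / p
                                + (y0_mean - y_cal.mean()) ** 2 / (b ** 2 * sxx))
    t = stats.t.ppf(1 - alpha / 2, p - 2)
    return x0, s_x0, t * s_x0

# Hypothetical 7-level calibration resembling Eq. 1 (Table 1 not shown here)
x_cal = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y_cal = -0.0017 + 0.01115 * x_cal + np.array([0.0005, -0.0008, 0.0007,
                                              -0.0004, 0.0006, -0.0009, 0.0003])

x0, s_x0, half = predict_with_ci(x_cal, y_cal, y0_mean=0.030, k=5)
print(f"x0 = {x0:.3f} ug/L, s_x0 = {s_x0:.3f}, 95% CI = x0 +/- {half:.3f}")
```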
Comparison of analytical methods

Typically, when comparing two methods (e.g., a reference and a new method, Table 5) across various levels of analyte concentration, it is customary to follow the procedure depicted in Fig. 7. If each sample produces the same outcome with both analytical protocols, the regression line will have an intercept of zero, a slope of 1 and a correlation coefficient of 1. Typically, we aim to examine whether the intercept deviates significantly from zero and whether the slope deviates significantly from 1. Confidence limits for the values of the slope and intercept are calculated, usually at a significance level of 95%.

Based on the calculation of the regression parameters (as in the previous worked example), the intercept is 0.1039, with lower and upper confidence limits of −0.0666 and +0.2745, respectively, encompassing the desired value of zero. Similarly, the slope is 0.9556, with a 95% confidence interval of 0.8809 to 1.0303, which includes the value of 1.

Note: The focus is more on establishing the range within which the slope and intercept values are likely to fall, rather than relying solely on the correlation coefficient. The method that offers higher precision is represented on the x-axis of the graph, while we assume that the error in the y-values remains constant (homoscedasticity).

Bland-Altman plots serve as a valuable graphical tool, especially in clinical analysis, to compare and evaluate the agreement between two sets of data obtained by different measurement techniques. Illustrated in Fig. 8, these plots visually depict the difference between two measurements on the y-axis and the average of those measurements on the x-axis. Additionally, a horizontal line is incorporated to represent the mean difference between the two measurements. As already mentioned, these plots also feature lines, the limits of agreement (LOA), drawn at ± 1.96 × the standard deviation of the differences (sd_d) from the mean difference (m_d). Broad LOA, or values that consistently fall outside these bounds, indicate a lack of agreement between the two methods (see the "Practical example" and "Comparison of analytical methods" section). This also enables the identification of any potential outliers within the data [24]. From the data of Table 5, sd_d = 0.101, so 95% of the differences will lie within m_d ± 1.96 × 0.101.

Conclusions

Researchers need to realize the limits and capabilities of conventional statistics, and they should bring into their chemical analyses elements of scientific judgement about the plausibility of the statistics. This review focuses mainly on regression and correlation: finding connections between two variables, measuring those connections, and making predictions of analyte concentrations in a proper way. As the practical example shows, the Excel software package can easily generate a large number of statistics in a form that is digestible and easily applicable. This tutorial review should benefit researchers and authors embarking on studies handling analytical measurements.

Fig. 1 Leverage due to uneven distribution of the calibration standards
Fig. 3 Calibration data set in standard plots with linear axes showing response plotted against (A) concentration and (B) logarithm of concentration. Graph B is often incorrectly used to prove linear dependence between analytical response and concentration
Fig. 4 Effect of an outlier (A) in the middle of the calibration range (bias) and (B) at the extremes of the range (leverage)
Fig. 6 Plot of residuals vs Cd concentration. No evidence of outliers is observed. Dashed lines represent ± t(0.05, p − 2) × S_y/x
Fig. 7 Comparison of two analytical methods with regression analysis
Table 1 Calibration data for Cd determination in natural water
Table 2 Output of Excel regression
Table 3 Responses and residuals from the data of Table 1 after OLS regression. (a) Can be provided by statistical packages like Excel (it is crucial to make sure that an adequate number of significant figures is employed). (b) S_y/x = residual standard deviation (or S_RES); can also be found in the output of the Excel regression (first table, indicated as "Standard error")
Table 4 Data treatment for the calculation of the terms SS_PE and MS_PE
Table 5 Data obtained by two analytical methods
Rain Water Harvesting of Sant Gadge Baba Amravati University campus in Amravati - A Case Study

People usually complain about the lack of water, yet during the monsoons a great deal of water goes to waste into the gutters. This is where rain water harvesting proves to be the most effective way to conserve water: rain water can be collected into tanks and prevented from flowing into drains and being wasted. It is practiced on a large scale in the metropolitan cities. Rain water harvesting comprises the storage of water and water recharging through technical processes. Rain water harvesting is the simple collection or storing of water, through scientific techniques, from the areas where rain falls. It involves the utilization of rain water for domestic or agricultural purposes. The method of rain water harvesting has been in practice since ancient times. It is by far the best possible way to conserve water and awaken society to the importance of water. The method is simple and cost-effective too, and it is especially beneficial in areas that face a scarcity of water. This paper presents rain water harvesting as a case study of Sant Gadge Baba Amravati University, Amravati, Maharashtra, which proves to be an effective and promising way of supplying rain water to the whole university campus throughout the year.

Alluvium: About 20% of the Amravati taluka, along the Pedhi basin, is occupied by alluvial deposits. The alluvium consists of clay, sand and silt, with a thickness ranging from 10 to 15 m and a wide areal extension spread over 184 sq. km. It is of recent age and lies over the Deccan Traps.

D. Rainfall
The rainfall in this area is moderate, generally starting from mid-June and continuing up to October. The average annual rainfall in the area is 782.00 mm, most of which is received during the South-West monsoon. The recharging of bore wells takes place for only about four months of the year. When the average annual rainfall of the district for the last ten years is compared with the normal annual rainfall, it is observed that the former is much less than the latter. Thus, rainfall has definitely decreased in the district over this period.

E. Hydrogeology
The total area of the University is about 477 acres. The soil mainly consists of basalt stone; due to the presence of basalt, the absorption capacity of the soil is very low. In Amravati taluka, ground water occurs in the upper weathered and fractured parts of the Deccan Trap basalt, mostly down to 15-20 m depth. At places, potential zones are encountered at deeper levels in the form of fractures and inter-flow zones. The upper weathered and fractured parts form the phreatic aquifer, in which ground water occurs under water-table (unconfined) conditions. At deeper levels, the ground water occurs under semi-confined conditions. The Pohra and Ner Pinglai hills and the rugged basalt terrain do not form potential aquifers due to the limited thickness of weathered material.

II. RAIN WATER HARVESTING
Rain water harvesting is the technique of collecting and storing rain water at the surface or in sub-surface aquifers, before it is lost as surface run-off. The augmented resource can be harvested in times of need. Artificial recharge of ground water is a process by which the ground water reservoir is replenished at a rate exceeding that under natural conditions. An old technology is gaining popularity in a new way.
Rain water harvesting is enjoying a renaissance of sorts around the world, though it traces its history to biblical times.

A. Design Considerations
The important aspects to be looked into when designing a rain water harvesting system to increase ground water resources are:
1) The hydrogeology of the area, including the nature and extent of the aquifer, soil cover, topography, depth to water level, and chemical quality of the ground water.
2) The availability of source water, one of the prime requisites for ground water recharge, basically assessed in terms of non-committed surplus monsoon runoff.

A. Rain Water Harvesting through Percolation Tank
A percolation tank is an artificially created surface water body, submerging highly permeable land in its reservoir so that surface runoff is made to percolate and recharge the ground water storage. Percolation tanks should preferably be constructed on second- to third-order streams, located on highly fractured and weathered rocks that have lateral continuity downstream. The recharged area downstream should have a sufficient number of wells and cultivable land to benefit from the augmented ground water. Since the purpose of percolation tanks is to recharge the ground water storage, seepage below the seat of the bed is permissible. For dams up to 4.5 m in height, cut-off trenches are not necessary; keying and benching between the dam seat and the natural ground is sufficient.

B. Roof Top Rainwater Harvesting through Recharge Pit
Recharge pits are small pits of any shape, rectangular, square or circular, constructed with brick or stone masonry walls with weep holes at regular intervals. The top of the pit can be covered with perforated covers, and the bottom of the pit should be filled with filter media.

C. Roof Top Rain Water Harvesting through Recharge Trench
A recharge trench, as shown in Figure 2, is provided where the upper impervious layer of soil is shallow. It is a trench excavated in the ground and refilled with porous media such as pebbles, boulders or brickbats. It is usually made for harvesting surface runoff. Bore wells can also be provided inside the trench as recharge shafts to enhance percolation. The length of the trench is decided according to the amount of runoff expected. This method is suitable for small houses, playgrounds, parks and roadside drains.

D. Roof Top Rain Water Harvesting through Existing Tubewells
In areas where the shallow aquifers have dried up and existing tubewells tap a deeper aquifer, roof top rain water harvesting through an existing tubewell can be adopted to recharge the deeper aquifers. PVC pipes of 10 cm diameter are connected to the roof drains to collect rain water. The first roof runoff is let off through the bottom of the drain pipe. After closing the bottom pipe, the rain water of subsequent showers is taken through a T-junction to an online PVC filter. The filter may be provided before the water enters the tubewell. The filter is 1-1.2 m in length and is made of PVC pipe. Its diameter should vary depending on the roof area: 15 cm if the roof area is less than 150 sq. m and 20 cm if the roof area is more. The filter is provided with a reducer of 6.25 cm on both sides. The filter is divided into three chambers by PVC screens so that the filter material does not get mixed up. The first chamber is filled with gravel (6-10 mm), the middle chamber with pebbles (12-20 mm) and the last chamber with bigger pebbles (20-40 mm). If the roof area is larger, a filter pit may be provided. Rain water from roofs is taken to collection/desilting chambers located on the ground.
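The design step of estimating the available source water lends itself to a simple worked calculation. In the sketch below, the 782 mm annual rainfall is the figure reported above for the area, while the rooftop area and runoff coefficient are illustrative assumptions:

```python
# Rooftop harvesting potential: volume = roof area x rainfall x runoff coefficient.
# The roof area and runoff coefficient are illustrative assumptions; the
# 782 mm annual rainfall is the figure reported for the Amravati area.

ROOF_AREA_M2 = 500.0        # assumed rooftop catchment of one campus building
ANNUAL_RAINFALL_M = 0.782   # 782 mm average annual rainfall
RUNOFF_COEFF = 0.8          # typical value assumed for a hard roof surface

harvest_m3 = ROOF_AREA_M2 * ANNUAL_RAINFALL_M * RUNOFF_COEFF
print(f"Potential harvest: {harvest_m3:.0f} m^3 "
      f"(~{harvest_m3 * 1000:.0f} litres) per year")
# 500 m^2 x 0.782 m x 0.8 ~= 313 m^3, i.e., about 3.1 lakh litres annually
```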
These collection chambers are interconnected, as well as connected to the filter pit, through pipes having a slope of 1:15. The filter pit may vary in shape and size depending on the available runoff and is back-filled with graded material: boulders at the bottom, gravel in the middle and sand at the top, with varying thickness (0.30-0.50 m), and the layers may be separated by screens. The pit is divided into two chambers, with the filter material in one chamber; the other chamber is kept empty to accommodate excess filtered water and to monitor the quality of the filtered water. A connecting pipe to the recharge well is provided at the bottom of the pit for recharging the filtered water through the well.

E. Recharging of Bore Wells
Rainwater collected from the rooftop of the building is diverted through drainpipes to a settlement or filtration tank. After settlement, the filtered water is diverted to bore wells to recharge the deep aquifers. Abandoned bore wells can also be used for recharge. The optimum capacity of the settlement/filtration tank can be designed on the basis of the catchment area, the intensity of rainfall and the recharge rate, as discussed under the design parameters. While recharging, the entry of floating matter and silt should be restricted because it may clog the recharge structure. The first one or two showers should be flushed out through a rain separator to avoid contamination. This is very important, and all care should be taken to ensure that it is done.

F. Rain Water Harvesting through Dugwell Recharge
Existing and abandoned dug wells may be utilized as recharge structures after cleaning and desilting. The recharge water is guided through a pipe from the desilting chamber to the bottom of the well, or below the water level, to avoid scouring of the bottom and entrapment of air bubbles in the aquifer. The recharge water should be silt-free; to remove the silt content, the runoff water should pass through either a desilting chamber or a filter chamber. Periodic chlorination should be carried out to control bacteriological contamination. Rainwater from the rooftop is diverted to dug wells after passing it through a filtration bed. Cleaning and desilting of the dug well should be done regularly to enhance the recharge rate. The filtration method suggested for bore well recharging can be used.

IV. GROUNDWATER YIELD UTILIZATION
Ground water is the only source of water for various uses on the campus. There are 17 bore wells, of which 8 are very high yielding. The yield of the wells ranges between 2,000 and 150,000 litres. The total quantity of water stored in one year is about 8-9 lakh litres. Details of the existing water sources due to rain water harvesting on the Amravati University campus are given in Table 1, and the quantity of water yielded from the various storages is given in Table 2. The total depth of the bore wells ranges from 44 m to 136 m, while their diameter ranges from 0.1 m to 6 m. The bore well near the Zoology Building can supply water for up to 16 hours per day. The ground water is utilized in various buildings, such as the hostels, residential quarters, canteen and many more. The water is available in such plenty that there is no need to purchase water from the government.

A. Depth of Water Level
In the area, the ground water table ranges between 5 and 10 m below ground level. In one bore well, the water level was observed at 10 m below ground level; in another, it was observed at 17 m below ground level in the non-pumping state. The area is situated at the lowest point of the gradient.
If the gradient is considered as zero, the highest gradient available here is 18 m. The University has spent a grant of Rs. 12 lakhs on the rain water harvesting project and is continuing the expenditure. The University is also implementing surface rain water harvesting, for which a Kolhapuri bandhara, a village tank and a percolation tank have been constructed. The topography of the university is such that natural rainfall on the ground surface can be put to use. A swimming tank has also been constructed; the water for the swimming tank is taken from a bore well. At present, the University supplies water to the girls' hostel building on the campus, the boys' hostel, the canteen building, the rest house, the swimming tank, etc. The University has no water supply from any outside source; it uses only the water collected through rain water harvesting.

V. CONCLUSION
At Sant Gadge Baba Amravati University, Amravati, Maharashtra, the rain water harvesting project has proved very beneficial and fruitful. The campus fulfils all its water demands from this source alone; there is no supply other than this ground water. Rooftop and surface rain water harvesting bring many environmental benefits. Rain water harvesting is very important, as it helps to improve the quality of the ground water, reduce soil erosion and reduce ground water pollution. It raises the ground water table. Rain water harvesting is an effective, economical concept for the recharge of ground water. The main objective of rooftop rain water harvesting is to make water available for future use. Capturing and storing rain water for use is particularly important in dryland, hilly and urban areas.
The application of the Arch Method of problematization in data collection in nursing research: an experience report

Study developed with 152 workers of a psychiatric hospital of the state network of Paraná, Brazil, in 2008. Objective: to report the experience of applying the Arch Method in the data collection of a nursing study. The data used for the description of the experience were obtained through the application of the Arch Method. The development of the steps of this method requires judicious analysis of, and reflection on, the object of study, and well-delineated planning for the results to be achieved. The experience of applying the Method in research for the construction of a benchmark for mental healthcare allowed reflection-action-reflection based on the everyday work experiences of the subjects, while also providing the opportunity to collect the research data. It contributed to the humanization of the care provided and mobilized those involved toward meaningful learning about reality in a dynamic and complex way.

INTRODUCTION
The human being possesses the ability to interact with the object of knowledge, with the phenomena present in their reality, and to establish social relationships (3). Thus, researchers, by resorting to educational practices in conjunction with research, should choose a working method that guides them, leads to satisfactory results, and is consistent with the proposed objectives; the Problematization Methodology is one such choice (5). This Methodology entails the active participation of the subjects; it considers the context of their lives, their history and their experiences, respecting the rhythm of learning of each one (5,6).

Grounded in humanism, the Problematization Methodology places the human being and human values above all other values. Phenomenology contributes the basic postulate of the intentionality of human consciousness, affirming that the object exists only for the subject who gives it significance, and that consciousness of the object is revealed progressively and never exhaustively, in an ongoing exploration of the world. Existentialism contributes the belief that man constructs himself and can be the subject: when integrated into their context, the subjects reflect on it and commit themselves to it in pursuit of the work of conscientization, through the process of raising critical consciousness of a reality that is progressively unveiled. Finally, Marxism contributes the concept of praxis as a transforming activity, making it possible to move from theory to practice, consciously linking thought and intentionally realized action (5).

Consisting of a set of methods, techniques, procedures or activities intentionally selected and organized, the Problematization Methodology has the primary purpose of "preparing the [...] human being to become aware of their world and to also act intentionally to transform it" (4:10). Thus, man becomes active in the transforming process of the world and society, in order to improve the quality of human life (3,6,7). Therefore, this method has as its starting point the reality of the subject, the scenario to which the subject belongs and where various problems can be seen, perceived or deduced, so that they can be studied together or in pairs. The observation of reality depends on the worldview and life experiences of each person and may differ from one observer to another (6).
The work scheme constructed by Charles Maguerez, called the Arch Method (6), has been widely used by professionals in the healthcare area, including those in nursing (9-18). The Arch Method has as its starting point the observation of reality, in a broad and attentive way, in which one seeks to identify what needs to be worked on, investigated, corrected or perfected. From the aspects verified, problems are selected to be studied. The second step is to identify the key points, when what will be studied in respect of the problem is defined. The theorization, the third step, consists of the thorough investigation of the defined key points. It is in this step that the reading of research and studies is encouraged, in order to seek clarification of the situation-problem (6,15).

After the theoretical deepening, with analysis and discussion of the problem, the elaboration of assumptions or hypotheses of solution follows. In this fourth step, the participants use their creativity to propose changes in the observed context. The fifth step is the application to reality, in which the viable solutions are implemented and applied for the purpose of transformation, however small, of that portion of reality (3,4,6).

While the Problematization Methodology and the Arch Method are widely used in nursing studies (7-17), one must consider that the development of all its steps, and the inclusion of the subject as a participant, make its application a complex and difficult task. This occurs for diverse reasons, such as the fact that the majority of healthcare professionals have an academic formation based on a transmissive model of pedagogy. To work with this Methodology, internal flexibility and a willingness to establish dialogue with the subjects are necessary, as well as placing oneself in the position of mediator and facilitator of learning, considering the various points of view and the knowledge of each person. It is also essential that the facilitator and/or researcher have theoretical depth regarding the content to be problematized, as well as clarity concerning the method to be used. Given the above, the aim of this study is to report the experience of applying the Arch Method of Problematization in the data collection of a nursing study.

METHODOLOGY
This study is a report of the experience of using the Arch Method in a qualitative study of healthcare practice, developed in 2008 in a hospital specializing in psychiatry, with integral hospitalization, belonging to the state public network of Paraná.
The institution in which the study was developed reported the need to implement a program of ongoing education for its employees, in the form of training, and therefore requested the support of the Department of Nursing of the Federal University of Paraná (UFPR) regarding this demand. With this, the opportunity arose to develop a proposal for sensitizing the nursing team of this institution regarding mental healthcare. However, the institution recommended that this training be extended to all employees of the hospital: physicians, psychologists, occupational therapists, nutritionists, social workers, physiotherapists, and also those who held administrative, technical and operational positions. This was because it was considered important that all the workers have a more comprehensive view of the nursing work. This approach is consistent with what is currently recommended by public policies in mental health, which call for greater integration between mental health professionals, with emphasis on the interdisciplinarity of care. Thus, this study involved 152 workers of diverse professional categories, positions and functions, according to the rules of the institution (Table 1). The majority of the participants in this study were female, aged between 41 and 60 years. The length of time since graduation and of practice in mental health was proportional, and there was a prevalence of 69 participants (45.39%) who had worked in the institution (the field of study) for more than 21 years; 65 (42.76%) had worked there for between five months and 20 years, and 18 (11.85%) did not provide the information.

This information is important and justifies the need to provide time for reflection and discussion about the practice of mental healthcare, considering that some of the major changes in the legislation and national policy for mental healthcare have occurred in the last 15 years.

The data were collected using the Arch Method of Maguerez, in four meetings for each of the eight groups of employees of the institution, totaling 32 meetings. The meetings occurred during the work period; the groups were therefore organized so that each meeting was repeated twice, once for each shift, because the timetable consists of 12 hours of work followed by 36 hours of rest. The service is organized into four shifts: two day shifts and two night shifts. Thus, the workers were divided into two groups per shift, in which some maintained the care activities with the patients while the others participated in the meeting.

The ethical aspects were safeguarded through the formal consent of the hospital and the Terms of Free Prior Informed Consent, observing the legal and ethical standards for scientific research involving human beings (19). The project was approved by the Research Ethics Committee of the Health Science Sector of the UFPR (CAAE 2035.0.000.091.0).

Observation of the reality and elaboration of the situation-problem - 1st step of the Arch Method
At the first meeting, the aims and methodology of the work were presented. For the initial step of the Arch Method, which consists of the observation of reality, audiovisual resources were used to debate with the participants how the care is developed by the team, for whom the care is developed, and what it is like to work in a team.
At that time, the participants reported that they felt the need to discuss the concepts that sustain mental healthcare, and thus identified the situation-problem: the construction of a benchmark for the care of the psychiatric patient in the institution in which they work.

Definition of the key points - 2nd step of the Arch Method
In this step, the key points to be studied and discussed, which would support the resolution of the situation-problem, were identified. The participants considered the following concepts relevant to their practice: nursing, the human being, health-disease, the environment, the team and the interpersonal relationship.

The choice of concepts, performed in a shared way with the team and supported by a theoretical framework, allows reflection on professional practice, as well as the conscious use of a theoretical framework and thus a critical and reflective practice. A benchmark is a set of concepts that are intertwined and, through this mutuality, create a correlation of meanings and values for a given professional practice, with the aim of supporting the nursing care.

Theorization - 3rd step of the Arch Method
In the third step, the discussion of the concepts chosen by the subjects took place: nursing, the human being, health-disease, the environment, the team and the interpersonal relationship. Due to the plurality of the composition of the subjects in the meetings, to discuss the concept of nursing the incomplete statement "for me, nursing is..." was used for the nursing professionals, and "I perceive the work of nursing as..." for the other participants. The other concepts listed by the group as inherent to the practice of the mental health workers were problematized in a singular way: "for me, the human being is...", "for me, health and disease are...", "for me, the team is...", "for me, the environment is...", "for me, the interpersonal relationship is...".

Three groups were formed, and everyone was given note paper of different colors (green, yellow and pink) to facilitate the mediation of the activities and the organization of the research data. The groups whose members were from the nursing team received the green paper; those composed of technicians/professionals (psychologists, doctors, social workers, occupational therapists, pharmacists), the yellow paper; and the other staff (telephonists, maintenance agents, kitchen and general services, administrative assistants), the pink paper.

This step was developed in individual and group phases. In the individual phase, the participants were asked to reflect on the reality they experience in the quotidian of their work in mental health in the institution and to complete the statement "for me, nursing is..." or "I perceive the work of nursing as...". This strategy was repeated up to the discussion of the last concept. Thus, each participant expressed their experience regarding the concepts through their writing, with some complementing this with drawings.

In the sequence, the group discussion of the key points (concepts) elected by the participants in the second step of the Arch Method was carried out. For the development of the activities, the participants who had the same color of paper grouped together. When the number in a group exceeded eight, a division into two or more groups was suggested, to allow better participation, valorization and sharing of ideas in the discussion of each concept/key point.
After the formation of the groups, the members shared with each other the content they had individually recorded and, as a result, formulated a concept that represented the idea of all that was written, in the form of a poster. After finalizing this activity, each group presented the constructed concept to the other groups. Regarding the way the presentation was carried out, the group fixed the poster on a line of string that ran along the walls of the room. The posters were numbered according to the professional category/position and occupation of the participants. This dynamic was maintained for the other constructions, relating each concept with the mental healthcare developed by the workers of the institution, and seeking to valorize the experience of each participant. After the construction in the group, each concept was theorized considering the Theory of Interpersonal Relationships.22 After each meeting, the posters were withdrawn and the papers of individual production picked up and analyzed, and the central ideas grouped to be discussed and validated by the groups that participated in the subsequent meetings. Elaboration of solution assumptions - 4th Step of the Arch Method. In this step the solution assumption was prepared, which is the proposed construction of the benchmark to support the mental healthcare of the team. Respecting the reality and the conditions of the institution, previously described, and seeking the feasibility of the assumption, the practice of the employees of the institution was problematized in light of the reference of Joyce Travelbee,22 which describes nursing as an interpersonal process through which the nurse helps a person, family or community, aiming to promote health and to prevent or cope with the experience of illness and mental suffering. To this end, the nurse needs to create a social, biological, psychological, cultural and physical environment conducive to reciprocal relationships, through which every human being can learn. The human being is unique and irreplaceable, similar and, at the same time, different in relation to the other person. Therefore, each should be valorized and respected in their individuality. The team consists of members from diverse health disciplines, who can share and perform the therapeutic relationship in order to help the person reintegrate into society. Regarding the concept of health, Travelbee considers it "something the person is, how they demonstrate certain behavior and attitudes",22:7 and these attitudes are related to the ability to love, to cope with reality and to discover a purpose or meaning in life. The experience of illness helps human beings to grow and strengthen, thus recognizing their limitations and potential. In this sense, the interactions should be planned with a view to care that enables the human being to comprehend, cope with and deal with situations of disease or to live with the limitations imposed by them. Application to reality - 5th Step of the Arch Method. To develop the activities of the last step of the Arch Method, all the posters that contained the concepts constructed by the groups in the previous meetings were hung on the line. An order was maintained where, for example, all the nursing concepts were together, and so on, similarly, with the others. When the participants
arrived at the site of the meetings, they expressed surprise at the quantity of material produced by them, as well as satisfaction in reading what their colleagues from other groups and shifts had produced. The participants were asked to circulate the room and read the concepts expressed in the posters, and were told to start with those of nursing, continuing in the order in which they were discussed and constructed during the meetings. Next, the concept, pre-prepared by the researcher from the central ideas of all the posters, was presented. They were asked to read and validate the idea, as it was expressed in the set of concepts contained in the posters. There was intense participation. Inclusions, substitutions and exclusions of terms were suggested, so that, in the final step, we obtained a concept that expressed the opinion of that group. Because this process was repeated with the other groups, the modifications in the concepts were highlighted in different colors to be validated by the subsequent groups. Each group repeated the process described above: the concept previously developed by the researcher was first presented, the collaborations of the group were accepted and, as the process progressed, the concept was re-presented with the alterations suggested by the group that preceded it. Thus, at the end of each meeting, we had the concept of that group, considering the contributions of the previous group, culminating in the final concept after all the groups had gone through the same process. This activity was then developed in each group again, in light of the reference of Joyce Travelbee, which generated new discussions, suggestions for inclusions, substitutions and exclusions regarding the concepts, and even reflections on the practice of mental healthcare, culminating in the final concept of the group. This process was repeated with the concepts of nursing, the human being, health-disease, the environment, the team and the interpersonal relationship, resulting in the benchmark. Summary of the trajectory followed. The experience of applying the Arch Method in the collection of data for the construction of a benchmark for mental healthcare allowed us to reaffirm its importance in the realization of this proposal. It should be noted that, due to the dialogical-problematizing character intrinsic to the Problematization Methodology, in which everyone involved in the teaching-learning process makes efforts and is provided the possibility of comprehending and overcoming situations that are part of the study object, the construction that occurred through the collective experience was shared equally by subjects and researcher.
However, the study site provided the opportunity to cover, in addition to the nursing professionals, other professionals and workers directly or indirectly linked to the work of the nursing team, which provided the conditions in which various views were described in the problematization. Another aspect considered in this study, and corroborated by others,7-17 consists of the significant learning characterized by the construction, the need for transformation and reconstruction of knowledge, in a movement in which the naive consciousness becomes critical and provides recognition of the professionals in their participation in the healthcare. Therefore, as a result, the subjects gained with the appropriation of new knowledge, and the investigator/mediator deepened their knowledge of the study object, with regard to its meanings and what it came to signify for the subjects regarding how to teach.1 However, this study was limited by the impossibility of evaluating the influence of the application of the concepts in the practice of the subjects, since the last step concerns the implementation of the concepts constructed in the reality, without providing conditions for the evaluation. To exemplify the application, in particular the description of the steps in accordance with the proposal of Maguerez, we chose to make the following graphical representation, organized by the authors and adapted for this article. Figure 1 shows the trajectory followed in the collection of research data, using the steps of the Arch through Problematization, with the workers of the institution as the field of this study. FINAL CONSIDERATIONS The Problematization Methodology supported the construction of the educational-reflective process, which contributed to the humanization of the care, from the significant experiences of the participants in the quotidian of the institution, as well as presenting an opportunity to collect research data. To reflect on the experience implies focusing attention on this Methodology, due to its contribution to the process of knowledge construction and due to providing the reflection-action-reflection about the practice of mental healthcare. In the quotidian of the services, potential spaces for renewal, discussion and reflection on the practice in mental health have to be constructed, which provide the possibility to share information among the employees, the use of creativity and spontaneity, and the construction/deconstruction of new/old utopias in the practice of the workers, for the advancement and consolidation of this new model of mental healthcare. In the construction of the concepts that comprise the benchmark, the view emerged during the discussions that some workers possess the image and identity of nursing professionals. Thus, this study provided an opportunity to reflect on and discuss with the team the concept they have of the profession and the image society holds of it, particularly in the area of mental health. The dialogue established in the meetings evidenced the thoughts of the team concerning the context in which they are inserted, the relationship with the person with a mental disorder, with the institution, with the colleagues and with oneself. It also allowed both the researcher and the participants to aim for meaningful learning of the reality in a dynamic and complex form.
Figure 1 - Schematic representation of the trajectory followed in the use of the Arch Method for collecting the research data. Source: Adapted from Bordenave and Pereira.6
Low-input-power Sub-GHz RF Energy Harvester for Powering Ultra-low-power Devices Using RF energy to power ultra-low-power devices is an appropriate solution to reduce the dependency on conventional sources (e.g. batteries); however, providing a regulated DC voltage from low RF power is a challenging task. This work presents a low-input-power RF harvester designed with off-the-shelf components and composed of an RF rectifier and a power management integrated circuit (PMIC). In the rectifier, an inductive-matching technique is employed which consists of an inductive branch composed of a lumped inductor together with a short-circuited stub, due to the sub-GHz frequency. The rectifier is designed to present an optimal DC load as a function of the PMIC operation at low powers. A measured RF-to-DC conversion efficiency of 32% and a DC output voltage of 186 mV with an input power of −20 dBm at 888.7 MHz are achieved in the rectifier. The measured peak efficiency is 52% at −4 dBm. At −20 dBm, a relatively high efficiency and output DC voltage are obtained compared to other discrete rectifiers. For this power level, this performance is sufficient to maintain the operation of the PMIC, which delivers a power of 324 nW aiming to supply an ultra-low-power wake-up receiver. The required RF power to deliver a regulated DC voltage is so far the lowest reported in the literature for an RF harvester using off-the-shelf components and a continuous wave (CW) signal. I. INTRODUCTION In a context of an increasing deployment of the Internet of Things (IoT), the long-term operation of these devices is limited by their batteries. Duty-cycled IoT nodes consume high energy due to the idle listening of their transceivers. The use of wake-up radio receivers (WURx), driving the IoT node on demand, reduces the required energy since most of the time the main node is in a deep sleep state [1]. Moreover, making the WURx independent of the main battery and powering it with harvested RF energy will extend the battery lifetime [2]. The targeted application is depicted in Fig. 1. One of the main goals when powering a WURx with RF energy is to efficiently transform this fluctuating kind of power into a regulated DC voltage. The RF energy harvester is composed of a rectification circuit and a power management integrated circuit (PMIC). In [3] an architecture of a rectification circuit has been presented in simulation. This architecture is capable of powering sub-µW devices with a regulated DC voltage from a wide RF power level range. Depending on the input RF power level, the received RF signal is rectified by a power-dependent part of the circuit. The obtained DC voltage is then regulated by a PMIC. For RF harvesting from the environment, except when close to a base station (e.g., a mobile phone base station), the RF power density is very low. When transmitting RF power from a dedicated wireless RF energy source, increasing the distance of transmission is necessary; however, regulations limit the amount of RF power that may be transmitted. In both scenarios, a low RF power is expected at the input of the RF harvester circuit.
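To put these power levels in perspective, a short sketch using the free-space Friis equation shows how quickly the received power falls towards the −20 dBm design point of this work. The transmit power, antenna gains and distances are illustrative assumptions, not values taken from this paper.

import math

def friis_received_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
    # Free-space received power (dBm): P_rx = P_tx + G_tx + G_rx - 20*log10(4*pi*d/lambda)
    wavelength = 3e8 / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

# Assumed link: 20 dBm dedicated source, 2 dBi antennas on both sides, 889 MHz
for d in (1, 5, 10, 20):
    p_rx = friis_received_dbm(20, 2, 2, 889e6, d)
    print(f"{d:>3} m: {p_rx:6.1f} dBm = {10 ** (p_rx / 10) * 1e3:8.2f} uW")

Under these assumptions, the received power already drops below −20 dBm within a few meters, which motivates optimizing the rectifier for this power level.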
In this paper, a low-power RF harvester is demonstrated. A rectifier is implemented and optimized at low input power using the inductive-matching technique [4]. The minimal requirements of a commercial TI BQ25570 PMIC are taken into account in the design of the rectifier. A measured RF-to-DC conversion efficiency of 32% and a DC output voltage of 186 mV with an input RF power of −20 dBm at 888.7 MHz are achieved in the rectifier. For this input power level, a relatively high voltage, compared to other rectifiers fabricated with off-the-shelf components, is obtained while reaching good efficiencies. This performance is sufficient to maintain the operation of the PMIC, which regulates the DC output voltage of the rectifier and delivers a power of 324 nW, aiming to power an ultra-low-power WURx whose power consumption is estimated to be around the tens to hundreds of nW [5]. II. RF ENERGY HARVESTER DESIGN The RF energy harvester is composed of an RF rectifier and a PMIC, as depicted in Fig. 1. Because of its low quiescent current, the commercial BQ25570 PMIC from TI is chosen. This PMIC offers three main functions: a maximum power point tracking (MPPT) algorithm, the distribution of energy between a storage element and its load, and the regulation of the output voltage. All the functionalities are enabled when the PMIC is in the normal mode state. To study the characteristics of the PMIC connected with a rectifier, the latter is modeled as a voltage source with its internal resistor R_int, as in [6]. This model is used in [3] to measure the impact of R_int on the minimal input power and voltage required by the PMIC to output a regulated voltage and hundreds of nW. The authors suggest that R_int, which evidently is the output load of the rectifier in the model, must be higher than 10 kΩ to limit the quiescent power of the PMIC. Loads higher than 10 kΩ do not substantially reduce the power absorbed by the PMIC further. As a result, in the present work, the rectifier is designed with an optimal load higher than 10 kΩ. In [4], a shorted transmission line (TL), which acts as an inductance, is used in series with the diode to compensate its capacitive behaviour in a shunt topology. The advantage of the approach is to reduce ohmic losses in the input matching network as much as possible. The rectifier reported in [4] operates at 2.45 GHz, resulting in a reasonable shorted TL length. In the present work, a similar principle is used; however, the rectifier is designed at a frequency of 889 MHz, leading to a significantly longer shorted TL, which reduces the quality factor (Q factor) and consequently the RF-to-DC conversion efficiency. To reduce the length of the shorted TL, a high-Q-factor inductance, L_1, is used in combination with TL_1, as shown in Fig. 2.
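As a rough illustration of this inductive-matching idea, the sketch below evaluates the two closed-form relations involved: the series inductance needed to resonate out a capacitive diode reactance, L = 1/(ω²C), and the equivalent inductance of a short-circuited stub, L = Z0·tan(βl)/ω. The diode capacitance, characteristic impedance and effective permittivity are illustrative assumptions (a junction capacitance near 0.25 pF happens to reproduce the 128 nH cited in the following simulation results); the actual component values of this design were determined in ADS, which also accounts for parasitics that these formulas ignore.

import math

F = 889e6            # operating frequency (Hz)
W = 2 * math.pi * F  # angular frequency (rad/s)

def series_resonant_inductance(c_farads):
    # Inductance that cancels a series capacitive reactance at F: L = 1/(omega^2 * C)
    return 1.0 / (W ** 2 * c_farads)

def shorted_stub_inductance(z0_ohms, length_m, eps_eff):
    # Equivalent inductance of a short-circuited stub: L = Z0 * tan(beta * l) / omega
    beta = W * math.sqrt(eps_eff) / 3e8  # phase constant (rad/m)
    return z0_ohms * math.tan(beta * length_m) / W

# Assumed diode capacitance of ~0.25 pF (illustrative, not a datasheet value):
print(f"L for series resonance: {series_resonant_inductance(0.25e-12) * 1e9:.0f} nH")

# Equivalent inductance of an example 14 mm stub (assumed Z0 and eps_eff):
print(f"14 mm shorted stub:     {shorted_stub_inductance(80, 0.014, 2.6) * 1e9:.1f} nH")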
Using Keysight ADS, the values of the required components are determined in simulation. A BAT15 Infineon diode, D_1, is used due to its low forward threshold voltage and low series resistance, which reduce the losses at low power. A 50 Ω TL, TL_2, is placed solely for the connector. In simulation, the use of an ideal inductance of 128 nH leads to good matching between the rectifier and a 50 Ω RF power source and boosts the efficiency at −20 dBm when the rectifier is loaded with a 17 kΩ resistor. This 128 nH inductance is achieved with a 2712sp-27N Coilcraft coil and a shorted 14 mm TL. The value of the coil inductance is selected by making a trade-off regarding which section of the inductive branch introduces higher losses. Indeed, coils of too high inductance have a reduced Q factor, while beyond a certain length the losses of TL_1 become preponderant. The rectifier is implemented on a 0.7 mm-thick Rogers RO4350 substrate, an RF choke is used to block the fundamental and harmonic frequencies from the load, and finally a 1 nF capacitor, C_1, is used for smoothing the output DC voltage of the rectifier. For an input power of −20 dBm at a frequency of 889 MHz, an efficiency of 37% and an output DC voltage of 252 mV are obtained in simulation when the rectifier is loaded with a 17 kΩ resistor, R_1, which is suitable for the PMIC. III. MEASUREMENT RESULTS The prototype of the proposed energy harvester is shown in Fig. 3. The width of the RF rectifier is 0.019λ and its length is 0.062λ. S_11 is measured using an R&S ZVA 50 vector network analyser. As shown in Fig. 4, the measurement results are close to simulation and a |S_11| of −28 dB at 888.7 MHz is obtained. By varying TL_1, the operation frequency can be tuned to 868 MHz or 915 MHz, which are common ISM bands. For measuring the RF-to-DC conversion efficiency, the rectifier is fed with an 888.7 MHz CW signal using an R&S SMBV100A vector signal generator. A Krytar directional coupler and an RF power meter with a power sensor are employed to acquire the exact input power to the rectifier. First, using a Yokogawa GS610 source measurement unit, the load of the rectifier is varied at an input power of −20 dBm to find the optimal experimental load at this operating point, as shown in Fig. 5a. As aforementioned, the rectifier is modeled as a voltage source. Fig. 5b shows the measured I-V characteristic of the designed rectifier. Indeed, this characteristic corresponds to a voltage source in series with its internal resistor, whose equation is I_meas = (378 − V_meas) / 11 (1), where I_meas and V_meas are the measured output current in µA and output voltage in mV of the rectifier, respectively. The inverse of the slope is the experimental optimal load found in Fig. 5a and corresponds to 11 kΩ. Then, for an 11 kΩ load, the input power is swept. The DC voltage is measured at the output of the rectifier using a Keithley 195 DC meter. Measurement results are shown in Fig. 6.
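A minimal sketch of how the source model in (1) yields the quantities used in the PMIC configuration discussed below: the open-circuit voltage follows from setting I_meas = 0, the internal resistance is the inverse slope, and maximum power transfer occurs when the DC load equals R_int, at half the open-circuit voltage.

# Source model of the rectifier from (1): V_meas = V_oc - R_int * I_meas
V_OC_MV = 378.0     # open-circuit voltage (mV), obtained from (1) with I_meas = 0
R_INT_KOHM = 11.0   # internal resistance (kOhm), inverse of the slope in (1)

def output_voltage_mv(r_load_kohm):
    # DC voltage across a resistive load (voltage divider with R_int)
    return V_OC_MV * r_load_kohm / (r_load_kohm + R_INT_KOHM)

def output_power_uw(r_load_kohm):
    # DC power delivered to the load, in microwatts (mV^2 / kOhm = nW, then /1000)
    v = output_voltage_mv(r_load_kohm)
    return v * v / r_load_kohm / 1000.0

# Maximum power transfer at R_load = R_int = 11 kOhm:
print(output_voltage_mv(11.0))  # 189 mV, half of V_oc -> basis of the 50% MPPT fraction
print(output_power_uw(11.0))    # ~3.2 uW available at the -20 dBm operating point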
These results show an efficiency of 32% and an output voltage, V_load, of 186 mV for an input power of −20 dBm. For this power level, a relatively high voltage, compared to other rectifiers fabricated with off-the-shelf components, is obtained while reaching good efficiencies. The peak efficiency is 52% at −4 dBm, which corresponds to an output voltage of 1.5 V, and at −29 dBm the rectifier has an efficiency of 11% and an output voltage of 40 mV. At −20 dBm the measured efficiency is 5% lower than in simulation. The experimental optimal load is shifted 7 kΩ towards lower values compared to the simulated optimal load. However, the 11 kΩ experimental optimal load obtained is still higher than 10 kΩ and suitable for limiting the quiescent current of the PMIC. Differences between measurement and simulation are observed due to uncertainties introduced by the tolerance values of the lumped inductor and by the diode model, i.e. the breakdown voltage in Fig. 6b. Finally, the rectifier is connected to a BQ25570 PMIC. The latter uses a fractional MPPT algorithm to load its source with its optimal load. This algorithm consists of sampling the open-circuit voltage of the source and presenting a fraction of it back to the source. Depending on the source being used, the most appropriate fraction can be set. To determine the most appropriate open-circuit voltage fraction for the designed rectifier, from (1), a rectifier open-circuit voltage of 378 mV is obtained when making I_meas equal to 0. Half of this open-circuit voltage is 189 mV, which is almost the voltage that the rectifier delivers at its optimal load. As a consequence, the MPPT of the BQ25570 is fixed to 50%. The PMIC used is embedded in an evaluation board and the regulated DC voltage is set to 1.8 V, which is the minimum value allowed by the evaluation board without performing major modifications. The estimated consumption of an ultra-low-power WURx is around the tens to hundreds of nW [5]; as a consequence, the PMIC is loaded with an available load of 10 MΩ, emulating a consumption of 324 nW. With an input power of −20 dBm at the rectifier, the PMIC is capable of working in its normal mode without disabling its load, and hence providing a continuous regulated voltage of 1.8 V and an output power of 324 nW. Table I compares similar works found in the literature. In [7] an end-to-end efficiency (RF rectifier + PMIC) of 40% is reported at −25 dBm, implying a 37% rectifier efficiency; however, these results are obtained as a function of predicted input RF powers, and the efficiency of the rectifier with a controlled source is not given. An RF harvester with an end-to-end efficiency over 10% is obtained with a high peak-to-average power ratio (PAPR) signal in [8], whereas it is not able to operate with a −20 dBm CW source. A chaotic waveform as the input source of a rectifier is demonstrated in [9]; it can achieve an efficiency of 38%, but it requires a complex waveform design that reduces the efficiency of the power amplifier on the transmitter side. MMIC rectifiers are also reported, such as the 130 nm CMOS rectifier in [2], which has an efficiency of 30% at −23 dBm input power, although a high-cost MMIC process is required.
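The quoted efficiencies can be cross-checked from the measured output voltages, the 11 kΩ load and the input power converted from dBm, using η = V_load² / (R_load · P_in). A quick sketch over the three operating points mentioned above; the small deviations from the quoted 32%, 52% and 11% come from rounding of the published voltages.

def dbm_to_watts(p_dbm):
    # Convert power in dBm to watts: P(W) = 1e-3 * 10^(P(dBm)/10)
    return 1e-3 * 10 ** (p_dbm / 10)

def rf_dc_efficiency(v_load_volts, r_load_ohms, p_in_dbm):
    # RF-to-DC conversion efficiency: DC power in the load over RF input power
    p_dc = v_load_volts ** 2 / r_load_ohms
    return p_dc / dbm_to_watts(p_in_dbm)

R_LOAD = 11e3  # measured optimal load (Ohm)
for p_dbm, v_load in [(-20, 0.186), (-4, 1.5), (-29, 0.040)]:
    eta = rf_dc_efficiency(v_load, R_LOAD, p_dbm)
    print(f"{p_dbm:>4} dBm, V_load = {v_load * 1e3:6.0f} mV -> eta = {eta:.0%}")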
IV. CONCLUSIONS A prototype of the RF energy harvester is demonstrated in this paper. An inductive-matching rectifier aimed at low input power is proposed. A measured efficiency of 32% and a DC voltage of 186 mV are achieved at an input power of −20 dBm at 888.7 MHz. These results are sufficient to operate a commercial PMIC below the minimum limits specified in its datasheet. At −20 dBm the PMIC delivers a power of 324 nW, which is sufficient for the targeted application. The RF input power required to deliver a regulated voltage, as low as −20 dBm, is so far the lowest reported in the literature using off-the-shelf components with a CW signal. This design strategy is being used in an architecture consisting of the association of two rectifiers, which increases the RF power range harvesting capability without degrading the performance at −20 dBm. Fig. 1. Architecture aiming to reduce the dependency of an IoT node on the main energy source. Fig. 2. Schematic of the implemented inductive-matching rectifier aimed at low RF input power. Fig. 6. Simulated and measured results, a) efficiency and b) output voltage, V_load, at 888.7 MHz with the rectifier loaded with 11 kΩ. TABLE I. Comparison of low-input-power RF energy harvesters (table footnote: predicted input RF power).
Phytoremediation of soil contaminated with 2,4-D + picloram in Eastern Amazonia The objective of this work was to evaluate the potential of Urochloa brizantha cv. Marandu and Panicum maximum cv. Mombasa in the phytoremediation of soil treated with the herbicide 2,4-D + picloram, using Raphanus sativus Crimson Gigante as a bioindicator plant. The experiment was carried out in a greenhouse, in a completely randomized design, in two stages. In the first stage the treatments were: cultivation of U. brizantha and P. maximum with and without the herbicide dose, with five replications. In the second stage, the treatments consisted of cultivating R. sativus in soil free of herbicide residue, and in contaminated soil with prior cultivation of U. brizantha, with prior cultivation of P. maximum, and without previous cultivation of grass, with five replications. The units were treated with the herbicide individually in pre-emergence; after 15 days the grasses were sown. After 50 days, the forages were harvested and segregated into aerial and root parts, analyzing fresh and dry biomass (g) and height (cm). After removing the phytoremediation plants, R. sativus was transplanted, evaluating visual phytotoxicity at 5, 10, 15 and 20 days after emergence (DAE) and, at 20 DAE, the accumulation of green and dry matter (g) and height (cm). The evaluated grasses have phytoremediation characteristics for auxinic herbicides; R. sativus can be used as a bioindicator of the herbicide 2,4-D + picloram; the evaluated period was not enough to fully remove the effects of the herbicide. Introduction Herbicides in the synthetic auxin group have selective action and are often used to control the growth of weeds worldwide (Song, 2014). In this group, picloram (4-amino-3,5,6-trichloro-2-pyridinecarboxylic acid) and 2,4-D (2,4-dichlorophenoxyacetic acid) make up the majority of registered products used in agriculture (Brazil, 2013). 2,4-D acid, from the group of phenoxyacetic acids, is a herbicide known as a synthetic auxin and has a high tendency to leach into the soil, due to its low adsorption. Picloram, from the group of pyridinecarboxylic acids, is an auxin-mimicking herbicide with a long residual effect in the soil, which represents a high risk of contamination of the groundwater (Brito et al., 2001; Oliveira Junior, 2011; Santos et al., 2006). The residual effects of herbicides, known as carryover, can be remedied or mitigated through phytoremediation (Andrade et al., 2007; Lambert et al., 2012). The technique consists of using plant species capable of tolerating, filtering, extracting, stabilizing or degrading certain contaminating compounds (Tavares et al., 2013; Vasconcellos et al., 2012). The selection of plant species for a phytoremediation program depends on the plant's own characteristics, such as tolerance to heavy metals, high growth rate, high biomass production, abundant root system and good adaptability to local edaphoclimatic conditions (Oliveira et al., 2007). Based on the aforementioned attributes, forage species have potential for application of the phytoremediation technique. Several studies have proven its effectiveness in phytoremediation programs, such as Belo et al. (2011) and Carmo et al. (2008). However, in the Northern region of Brazil, studies that determine the action of remediating plants on soil contaminants, with species adapted to regional edaphoclimatic conditions, are scarce. The species Urochloa brizantha cv. Marandu (synonym Brachiaria brizantha), known regionally as brachiarão or simply marandu grass, and Panicum maximum cv.
Mombasa are among the forages most used in the Amazon production system, as they are a viable option for the local edaphoclimatic and management conditions (Camarão & Souza Filho, 2005). Due to the relevance of using these forages in the Amazon region, it is opportune to compare them in a phytoremediation program, for a better understanding of their response to the contaminant. Thus, the present work aims to evaluate the potential of U. brizantha cv. Marandu and P. maximum cv. Mombasa in the phytoremediation of soil treated with the herbicide 2,4-D + picloram, using Raphanus sativus Crimson Gigante as a bioindicator plant. Methodology The scientific method was experimental field research, which seeks answers, proof or even new phenomena in relation to the problem questioned in the research, through the creation and production of specific, controlled situations and random samples (Koche, 2011). The experiment was carried out in a greenhouse with a 25% shade screen, located at the Federal University of Pará / Altamira University Campus, Pará, at coordinates 03°11′40″S 52°12′33″W, from June to September 2017, using a completely randomized design carried out in two stages. In the first stage, the treatments were composed of the cultivation of U. brizantha cv. Marandu in soil free of herbicide residue (T1); P. maximum cv. Mombasa in soil free of herbicide residue (T2); contaminated soil with U. brizantha cv. Marandu (T3); and contaminated soil with P. maximum cv. Mombasa (T4), with five repetitions. The second stage, carried out in order to confirm the potential of the phytoremediation species, had as treatments the cultivation of the species Raphanus sativus Crimson Gigante in: soil free of herbicide residue (RT1); contaminated soil with previous cultivation of the species U. brizantha cv. Marandu (RT2); contaminated soil with previous cultivation of the species P. maximum cv. Mombasa (RT3); and contaminated soil without previous cultivation of grass (RT4), with five replicates for each treatment. For the constitution of the RT1 treatment, the pots from the T1 and T2 treatments (with herbicide-free soil) from the first phase were used, in order to compose the control of the bioassay. The samples consisted of ravine soil, classified as Yellow Latosol, collected in the 0-20 cm layer and in a place with no history of herbicide application, whose chemical analysis is shown in Table 1. These samples were submitted to the TFSA (air-dried fine earth) drying process and sieved through a 5 mm mesh. The experimental units consisted of pots with a capacity of 8 dm³ and without holes, in order to prevent the loss of the herbicide by leaching, filled with 9 kg of substrate, which were irrigated by adjusting the humidity to a value close to 80% of the field capacity and fertilized as recommended by Embrapa for forages (Dias Filho, 2012). Then, the experimental units received treatment with the herbicide 2,4-D + picloram. The applications were carried out in pre-emergence, individually on the surface of each pot, with the aid of a spray bottle. The solution was applied at twice the recommended dose, simulating a mixture of 3.5 L of the herbicide product in 200 L of water, this being the volume of spray mixture used in the region per hectare. For the application, the amount of herbicide and spray mixture needed was calculated by simple proportion (rule of three), considering the circumference of the pot.
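A minimal sketch of this rule-of-three scaling, in which the per-hectare spray volume is scaled down to the surface area of one pot derived from its circumference. The circumference value, and the reading that the doubling applies to the whole mixture, are assumptions for illustration, not figures reported by the authors.

import math

SPRAY_L_PER_HA = 200.0  # spray mixture applied per hectare (L)
PRODUCT_L = 3.5         # herbicide product per 200 L of water (L)
DOSE_FACTOR = 2.0       # assumed reading of "twice the recommended dose"
HA_M2 = 10_000.0        # one hectare in square metres

def pot_area_m2(circumference_m):
    # Surface area of a circular pot from its circumference: A = C^2 / (4*pi)
    return circumference_m ** 2 / (4 * math.pi)

def spray_per_pot_ml(circumference_m):
    # Spray mixture per pot (mL), scaling the per-hectare volume by pot area
    fraction = pot_area_m2(circumference_m) / HA_M2
    return SPRAY_L_PER_HA * fraction * DOSE_FACTOR * 1000.0

circ = 0.80  # assumed pot circumference (m)
print(f"pot area: {pot_area_m2(circ) * 1e4:.0f} cm^2")
print(f"spray volume per pot: {spray_per_pot_ml(circ):.2f} mL")
# Approximate product content of the spray (3.5 L per 200 L of water):
print(f"herbicide product per pot: {spray_per_pot_ml(circ) * PRODUCT_L / SPRAY_L_PER_HA:.3f} mL")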
After fifteen days, the forage species were sown at a depth of 1 cm, with five plants per pot remaining after thinning. Irrigation was performed daily, according to the water needs of the plants. At 50 days after sowing (DAS), the forages were harvested and their phytoremediation potential evaluated. These were segregated into aerial part and root, and the parameters of fresh biomass (g), dry biomass (g) and height (cm) of the segregated parts were analyzed for each treatment, according to the methodology adopted by Franco et al. (2014). For the second stage of the experiment, the species Raphanus sativus Crimson Gigante was sown in autoclaved sand, according to the manufacturer. After removal of the phytoremediation plants, the bioindicator plant was transplanted to test for the presence of residues of the herbicide in the soil. At 5, 10, 15 and 20 days after emergence (DAE), the visual phytotoxicity of the plants was evaluated, assigning scores according to the toxicity symptoms presented in the aerial part (Table 2); in the upper ranges of the scale, symptoms greater than in the previous category but still subject to recovery carry no expectation of a reduction in economic performance, scores of 5-7 (high) indicate irreversible damage with an expected reduction in economic yield, and scores of 7-10 (very high) indicate very severe irreversible damage with a forecast of drastic reduction in economic performance, a score of 10 denoting death of the plant. The data obtained were subjected to analysis of variance. The analysis of the significant effects of the period of cultivation of the phytoremediation species and the bioindicator plant was carried out by the Dunnett test, while the phytotoxicity results of the bioindicator species were analyzed by the Kruskal-Wallis test, with the coefficients of the equations tested at 5% probability. Results and Discussion The species were affected differently by the contaminant, as evidenced by the biometric data (Table 3). The effect of the application of 2,4-D + picloram in the treatments reduced the accumulation of fresh biomass (aerial part and root) and the production of root dry matter of the species U. brizantha, when compared with the control. Among the characteristics evaluated in the species P. maximum, only aerial biomass (fresh and dry) showed a significant effect when compared to cultivation in soil without application of the herbicide. Characterized by the control of annual or perennial dicot weeds, the biometric results show the sensitivity of the forage species when subjected to treatment with a higher-than-recommended dose, even though the herbicide is selective for grasses in general, corroborating the need for the rational use of these products. In general, the tolerance of grasses to auxin-mimicking herbicides is determined by a sum of factors, in which penetration into these plants is low and translocation in the phloem is limited, due to their anatomical structures (Oliveira Júnior et al., 2011). For the characteristics of aerial and root length, fresh root biomass and dry biomass of the aerial part and root, the cultivation of the bioindicator in succession to U. brizantha provided better development when compared to the previous cultivation of P. maximum. When compared to the control treatment, less damage was observed to the morphological characteristics of R. sativus when cultivated after U. brizantha, presenting similar behavior for the parameters plant height and accumulation of fresh and dry root biomass, indicating greater phytoremediation capacity of the species in the evaluated period.
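To illustrate the statistical analysis described in the Methodology, the sketch below applies a Kruskal-Wallis test to phytotoxicity scores and a Dunnett comparison of a biometric variable against the control at 5% probability. All data are synthetic placeholders, not the study's measurements; scipy.stats.dunnett requires SciPy 1.11 or later.

import numpy as np
from scipy import stats

# Synthetic phytotoxicity scores (0-10) of R. sativus per treatment:
scores = {
    "RT1": np.array([0, 0, 1, 0, 0]),  # herbicide-free soil (control)
    "RT2": np.array([3, 4, 3, 5, 4]),  # contaminated, after U. brizantha
    "RT3": np.array([5, 6, 5, 7, 6]),  # contaminated, after P. maximum
    "RT4": np.array([7, 8, 7, 9, 8]),  # contaminated, no previous grass
}

# Kruskal-Wallis on the ordinal phytotoxicity scores:
h, p_kw = stats.kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")

# Dunnett test on a biometric variable, e.g. fresh aerial biomass (g),
# comparing each treatment against the herbicide-free control:
control = np.array([12.1, 11.8, 12.5, 11.9, 12.3])
treat_a = np.array([9.8, 10.1, 9.5, 10.0, 9.7])
treat_b = np.array([8.1, 8.4, 7.9, 8.6, 8.2])
res = stats.dunnett(treat_a, treat_b, control=control)
for label, p in zip(["after U. brizantha", "after P. maximum"], res.pvalue):
    print(f"{label}: p = {p:.4f} ({'significant' if p < 0.05 else 'ns'} at 5%)")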
Pires et al. (2005), aiming to evaluate seven species (Cajanus cajan, Canavalia ensiformis, Dolichos lablab, Pennisetum glaucum, Estizolobium deeringianum, Estizolobium aterrimum and Lupinus albus) of plants cultivated for green fertilization in soils contaminated with tebuthiuron, using black oat as an indicator plant, observed similar behavior of the species for the characteristics of plant height and dry biomass of the shoots. The residues of the herbicide 2,4-D + picloram in the soil caused an increase in poisoning in R. sativus throughout its cultivation, for all treatments that received the application of the herbicide, manifesting visual symptoms in the leaves (Table 5). In the test plant, the effects demonstrate the high sensitivity of R. sativus to the presence of picloram and 2,4-D and reinforce the need for care when sowing vegetables in areas with a history of applications of herbicides that contain these active ingredients, with symptoms of intoxication appearing at 5 days after emergence. Similar works, such as those by Nascimento & Yamashita (2009), Assis et al. (2010), Madalão et al. (2016) and Galon et al. (2017), observed similar intoxication behavior in the tested vegetables. Determining the period of the intoxication effects of the herbicide compounds on the test plants defines their persistence and qualifies their phytotoxic action, indicating the safe time for planting new crops in succession to the one to which the herbicide was originally applied. The probable cause of the effects of the herbicide 2,4-D + picloram observed in the decrease of the growth and development of the test plant may be related to the mechanism of action in the plant and the long residual period of picloram in the soil, which induce metabolic and biochemical changes in species sensitive to the auxinic compound, causing senescence of the plants. The initial action of these compounds involves the metabolism of nucleic acids and the plasticity of the cell wall, affecting the growth of plants in a similar way to natural auxin, causing damage to the chloroplast, causing chlorosis of the leaf structure and decreasing the chlorophyll rate, leading to desiccation and tissue necrosis. At high concentrations of the herbicide, growth is inhibited because the compound reaches the meristematic regions of the plant, which accumulate both assimilates from photosynthesis and the herbicide transported via phloem. Intoxication also causes the synthesis of abscisic acid (ABA), resulting in stomatal closure, limiting the assimilation of carbon and consequently the production of biomass, in addition to contributing to the inhibition of photosynthetic enzymatic activity, resulting in leaf senescence and plant death (Hansen & Grossmann, 2000; Oliveira Júnior et al., 2011), as observed in this work. The relevance of the results in the test plant grown in sequence to U. brizantha cv. Marandu may have an intrinsic relationship with the morphological characteristics of the species, since it has high root production and volume, providing a larger occupied area, facilitating the absorption of water and nutrients and, therefore, of the xenobiotic compounds present in the soil, in addition to exhibiting greater rusticity and adaptability inherent to the edaphoclimatic conditions when compared to P. maximum cv.
Mombasa (Kanno, 1999; Camarão & Souza Filho, 2005; Vilela, 2005). Conclusion The species U. brizantha cv. Marandu and P. maximum cv. Mombasa have phytoremediation characteristics for auxinic herbicides, with emphasis on U. brizantha cv. Marandu. Raphanus sativus Crimson Gigante demonstrates potential as a biological indicator of the presence of the herbicide 2,4-D + picloram, presenting symptoms at 5 DAE. Its cultivation in succession to the grasses provided better conditions for the development of the vegetable; however, this was not enough for the total removal of the xenobiotic compound from the soil. Thus, the studied period proved insufficient for the total removal of the effects of the herbicide, requiring continuity of the decontamination process, since, as the plants grow, their phytoremediation action is increased by the greater absorption capacity, potentially achieving greater efficiency in the removal of xenobiotic compounds and, consequently, causing less damage to the herbicide-sensitive crop. Studies with a longer evaluation period are suggested, as well as the evaluation of the species U. brizantha cv. Marandu and P. maximum cv. Mombasa in an integrated cultivation system, so as to understand the effect of the simultaneous interaction of the species. It is also proposed to perform wet-route nitroperchloric digestion analysis in order to understand the active phytoremediation mechanism(s)/process(es) that result from the morphological characteristics of the studied species.
Evolvement of Peer Support Workers' Roles in Psychiatric Hospitals: A Longitudinal Qualitative Observation Study Peer support workers (PSWs) use their experiential knowledge and specific skills to support patients in their recovery process. The aim of our study was to examine the integration and role-finding process of PSWs in adult psychiatric hospitals in Germany. We conducted open nonparticipant observations of 25 multiprofessional team meetings and 5 transregional peer support worker meetings over a period of six months. The data were analyzed using qualitative content analysis. Regarding the integration of PSWs into multiprofessional teams, we identified three subcategories: "Features of success," "challenges" and "positioning between team and patients." Concerning the PSWs' roles, we developed two subcategories: "Offers" and "self-perception." The PSWs' specific roles within a multiprofessional mental healthcare team evolve in a process over a longer period of time. This role-finding process should be supported by a framework role description which leaves sufficient freedom for individual development. Regular opportunities for mutual exchange among PSWs can help to address specific support needs at different points in time. Introduction The concept of peer support work in mental healthcare means the involvement of people with lived experience in the treatment of people with mental health challenges. Peer support workers (PSWs) aim at promoting hope by use of their unique experiential expertise (Mahlke et al. 2017; Oborn et al. 2019; Yeung et al. 2020) and specific skills gained in their training. The approach arose from the mental health service user movement in the 1990s (Davidson et al. 2012). Since then, the development of peer support work and its implementation in the mental healthcare system has made progress in many countries due to changes in healthcare policies towards patient-centeredness. These changes were led mainly by Anglo-Saxon nations, such as England, Wales, Scotland, New Zealand, Australia, Canada and some states in the USA (Shepherd et al. 2008). Since 2005, PSWs have been educated according to the "Experienced Involvement" curriculum in Germany and other European countries, which is based on the so-called trialog movement (Amering 2010). The one-year education program is currently offered at more than 30 locations in Germany (EX-IN Deutschland 2020), with the result that more and more PSWs are seeking employment in mental healthcare institutions. There is still a lack of research about the integration and role-finding process of PSWs in psychiatric hospitals, as peer support work is a comparatively new profession, at least in Germany. The Regional Association of Westphalia-Lippe (LWL) initiated the project "Employment and payment of educated PSWs in the LWL Psychiatry Network" in 2011 within the framework of the implementation of the United Nations Convention on the Rights of Persons with Disabilities. The goal of the initiative was to promote the employment of PSWs in adult psychiatric hospitals of the LWL Psychiatry Network. Until that time, the adult psychiatric hospitals participating in this study had only had a few experiences with, and rare knowledge about, peer support work. Against this background, our research aimed to explore the integration and role-finding process of PSWs into mental healthcare teams in psychiatric hospitals and to identify promoting factors.
Furthermore, we aimed to clarify how PSWs perceive their roles and how the latter evolve over time. Up to now, various international studies have explored the topic of the implementation of peer support work in psychiatry (Chinman et al. 2010; Hamilton et al. 2015; Kent 2019; Siantz et al. 2016). Among these, there were both quantitative and qualitative empirical studies. In most cases, the latter were based on the evaluation of qualitative interviews and focus groups. To the best of our knowledge, our study is the only one which gathers knowledge by open nonparticipant observation of two settings in adult psychiatric hospitals over a longer period of time. Our longitudinal qualitative observation study was part of a larger project which consisted of three parts (Gather et al. 2019): (1) the observation study presented in this article, (2) qualitative interviews with PSWs, and (3) focus groups with PSWs and mental health professionals. Results from the interviews and focus groups have already been published elsewhere (Otte et al. 2020a, 2020b). Nonparticipant Observations We conducted open nonparticipant observations of 25 multiprofessional team meetings (TMs), each including one PSW, in wards of three adult psychiatric hospitals of the LWL Psychiatry Network in North Rhine-Westphalia, Germany, from April to October 2016. Additionally, we observed five transregional PSW meetings (PSWMs), i.e. meetings in which the PSWs of the different psychiatric hospitals gathered and discussed their experiences among themselves. Three further PSWs working in two other LWL adult psychiatric hospitals took part in these meetings (see Table 1). The study was approved by the Research Ethics Committee of the Medical Faculty of the Ruhr University Bochum. The observations were performed by two researchers with different professional backgrounds (A.N.: philosophy with a particular focus on medical ethics; A.W.: molecular biomedicine and peer support work). They were written down on-site in handwritten notes, which were then used for immediate documentation using a word-processing program. While creating the documentation, we anonymized all data to make sure that no person observed could be identified. We structured each observation following an observation protocol. We included general data (date, time, people present) and a description of the setting (e.g. the rooms, the positioning of people and the atmosphere) in each transcript of an observed meeting. We depicted the social interactions observed as descriptively as possible. Initial interpretations and thoughts were noted separately to distinguish them clearly from the social practice observed directly. All quotations in this paper are extracts from the observation transcripts. We inevitably came into contact with patient information while observing TMs in wards of psychiatric hospitals. We never documented any patient data, since this information was not relevant for our research. Nevertheless, before a meeting took place, all patients who were to be discussed in this TM were informed and asked for their consent. When patients refused or could not be asked, the observers left the room during the discussion of their cases. Data Analysis We analyzed the data in line with the basic principles of qualitative content analysis (Mayring 2014) using MAXQDA (MAXQDA 12 Standard Portable, VERBI Software GmbH, Berlin, Germany), a software for qualitative data management.
At first, each setting was treated separately in the analysis. Both first authors analyzed and coded the observation transcripts independently, using an open-coding process. They discussed their interpretations and conceptualizations of the data with the other members of the research group, each having a different disciplinary background (sociology, psychiatry, medical ethics, philosophy), until a shared understanding was reached. After that, initial categories capturing the core themes of the observed TMs and transregional PSWMs were developed, and the codes were grouped accordingly by A.N. and A.W. The categories were discussed within the research team and then further developed into more central categories. Results There were significant differences between the two settings observed, leading to diverging results gathered in each setting. During the TMs, we were able to observe the direct communication, interaction and behavior of the team, including the PSW. Thus, the TMs were the setting most suitable for findings concerning the central category "integration into the team." In addition, the observation of the transregional PSWMs offered the opportunity to learn about the thoughts, feelings and variety of challenges the PSWs faced during their job routine, as those meetings were used predominantly as a space for mutual exchange among the PSWs and had the character of a peer consultation. The central category "PSWs' roles" was developed mainly out of this setting. All categories, major subcategories and examples are listed in Table 2. Features of Success Although there are factual inequalities between PSWs and other mental health professionals regarding pay and power, we did not observe any kind of inequality between PSWs and other team members during their communication in the multiprofessional TMs. The respective senior psychiatrists or head nurses appeared to appreciate the contributions of PSWs in general. Senior psychiatrist to PSW5: "You wanted to say something? You inhaled!" (TM5 2). The PSWs' contributions had the same (high or low) significance as those of other team members. Whether or not a PSW contributed often appeared to be connected, first and foremost, with the individual personality of the specific PSW. The PSWs were on a first-name basis with other team members, such as psychiatrists and nurses, and included in the informal chatting (e.g. jokes or small talk) before and after a TM. The PSWs provided a lot of relevant information about patients during the observation period. The senior psychiatrist and the psychologist talk about a patient. They comment on the patient's lack of understanding of their condition and the deficits in behavior to improve their situation. PSW3: "But they came to me for information about self-help groups!". Senior psychiatrist (astonished): "Really?" (TM3 3). PSW2: "I can give a lot of helpful additional information, things the nurses don't even notice." (PSWM 2). On the other hand, team members asked PSWs to talk to specific patients, include them in a group or accompany them while doing certain tasks. The psychologist of the ward was sure that one of their patients would benefit from talking to PSW1 and asked her to initiate the contact. Psychologist to PSW1: "I'd like you to start with patient X immediately!" (TM1 3). The PSWs felt accepted when there was a contact person and this person had time for conversations.
Additionally, it was relevant that there was ongoing communication with superiors and colleagues to clarify mutual expectations and needs and, moreover, to provide feedback. PSW1: "I've got positive feedback about my presence in groups, because I bring in the patients' perspective." PSW2: "Yes, it was the same for me." (PSWM 1). PSW2 stated that she received official feedback after four weeks of working in the hospital. PSW2 "liked that a lot" (PSWM 1). Challenges We could observe good contact between the PSWs and their contact people in the TMs; however, the relationship was sometimes challenged by a full work schedule. The PSWs also used the PSWMs to talk about their contact people on the ward. On the sidelines of the observations, one PSW told us directly about his problems with the lack of access to resources. PSW2: "I have suggested the discussion with contact person X. I am tired of the fact that I still haven't got a staff mail account. Things aren't running smoothly for me yet!" (TM2 1). Positioning Between Team and Patients The PSWs expressed their challenges regarding positioning themselves between team and patients in the PSWMs and the TMs. The topics were mainly passing on important patient information, such as suicidal ideation, or conveying personal criticism to the staff. The PSWs experienced themselves not only as part of the team but also as relatively close to the patients. This posed a dilemma for the PSWs. PSW5: "I have a problem personally of conveying criticism to the team. I have to improve [i.e. working on finding the courage to criticize the team]." PSW1: "You haven't been present. Maybe you can support the patient better by helping them to report their criticism themselves." (PSWM 3). In accordance with the proximity to the patients' perspective, the PSWs mainly appreciated it when patients addressed them informally. PSW1: "You are well received while making the rounds when the patient forgets to address you formally; I view that as a compliment." (PSWM 1). Most of the PSWs did not mind being addressed informally but wanted to emphasize that they are members of the team by still addressing patients formally. PSW4, for instance, regards it as uncomfortable to address patients informally: "The patients can address me informally, but I keep it formal myself." (PSWM 3). Offers The PSWs performed a variety of tasks and provided support for patients on different levels during the observation period. The PSWs generally provided one-on-one conversations with patients, attended TMs, took part in ward rounds when asked by patients, and did informal tasks, such as attending supper, aiding patients with their kitchen duties and joining patients in going for a walk or running errands. Moreover, providing a bit of normality by offering small talk and just being there, being available for patients, was one of the most relevant tasks. Even though the respective activities persisted during the whole observation period, the PSWs talked about their everyday attending offers especially at the beginning. PSW5: "With one patient I talked about the weather, with the other about her hairdo … but you are available and helpful for their well-being and overall atmosphere." PSW1: "When I was a patient, I enjoyed small talk and the distraction caused by it." PSW5: "I can give normality, time. Sometimes I just go and sit down with a woman who is knitting." (PSWM 1). Additionally, the PSWs were involved in the follow-up care of discharged patients.
The PSWs talked about options for follow-up care repeatedly during the PSWMs. The group, for instance, was very interested in hearing about PSW2, who encouraged the patients to write letters including wishes and goals addressed to themselves, collected them and sent those letters to the patients an agreed time after discharge (PSWM 5). Another big part of the work was to develop and host therapeutic groups, such as recovery groups, social skills training or creative writing groups. Regarding such groups, the PSWs partly worked together with other staff members. PSW1: "I regard the leading of groups as one of my strong suits." (PSWM 1). Self-perception The PSWs often expressed their comprehension of their role figuratively, which becomes apparent in several terms and metaphors they used in the PSWMs. PSW5: "I see myself as a 'source of hope'." (PSWM 1). PSW3: "Patients want me as an 'advocate'." (PSWM 1). PSW6: "My colleagues often call me a 'pioneer'." (PSWM 3). PSW1: "I get involved when, for example, they want to change therapists or when they don't want to be discharged yet, when the patients have the feeling that 'I am fighting a losing battle.' But the patients manage it autonomously, presenting their concerns; I am the 'backup'." (PSWM 3). However, the beginning of the practical role-finding process was difficult for the PSWs. PSW5: "The head nurse says: 'No, you have to have the suggestions, the ideas!' I would have liked clear instructions sometimes." PSW4: "However, in retrospect, I appreciate these liberties, that nothing was imposed on me. Nevertheless, in the beginning, it [the leeway] was pressure and insecurity." (PSWM 2). The PSWs often stated in the PSWMs how important those meetings are for exchange, mutual confirmation and, therefore, role-finding. PSW1: "I feel like I'm 'floating' [i.e. feeling utterly insecure]; I am glad that we have the group." (PSWM 1). After the first three PSWMs, the PSWs had the feeling that they would benefit from a trained external supervisor. Therefore, beginning with PSWM 4, each PSWM included a time-limited session with an external supervisor. Respecting the PSWs' wishes, we did not observe these parts of PSWMs 4 and 5. PSW1 enumerates the positive features: "[…] satisfaction with the supervisor, this should be continued. And further peer support worker meetings." (PSWM 5). The feared or actual problem of not being able to find access to patients was a recurring issue in the PSWMs. PSW1: "During lunchtime you encouraged me. In the current phase, when the patients apparently don't need me so much, I feel superfluous …" (PSWM 1). Moreover, the PSWs discussed the fact that, considering the number of patients on a ward, it is not possible to have close contact with everybody. PSW5, for instance, stated that they have learned that "it is not possible to know everyone [i.e. every patient]" (PSWM 1). In general, we observed that the topics in the PSWMs changed during the observation period. At the beginning, the PSWs talked mainly about their everyday attending offers and their role-finding process. Later, both in the PSWMs and TMs, the focus was more on group offers, whereby the PSWs showed assertiveness. The team talks about what to do best to help a patient become more active. PSW5: "I also integrated the patient into my recovery group." The other team members murmur approvingly. (TM5 1). PSW4: "I speak quite frankly in the recovery group, personal stories as well; this goes down well with the patients." (PSWM 5).
PSW6: "One last thing, do you have a folder for the patients to take away?". PSW4: "I give them worksheets." PSW1: "That would be a good idea for the recovery group! Making one's own folder and designing it beautifully!" (PSWM 5). Discussion Our observations revealed that the members of the multiprofessional mental healthcare teams in the psychiatric hospitals appreciated the PSWs' statements during the TMs. Psychiatrists and psychologists assigned PSWs to specific patients and asked them to include patients in their group offers. This suggests that the mental health professionals understood the specific skills and knowledge of PSWs and were inclined to use the PSWs' special expertise for the benefit of the patients. At the beginning of their integration process, the PSWs were concerned about the lack of clarity of their role. This phenomenon of vague roles is discussed in several international publications (Crane et al. 2016;Gates and Akabas 2007;Gillard et al. 2014;Hurley et al. 2018;Ibrahim et al. 2020;Jenkins et al. 2018;Mahlke et al. 2014;Mancini 2018;Miyamoto and Sono 2012;Mowbray et al. 1996;Simpson et al. 2018;Tse et al. 2017;Vandewalle et al. 2016). A qualitative study in the USA (Cabral et al. 2014) and two qualitative studies in the UK (Gillard et al. 2013(Gillard et al. , 2015a suggest that this unclarity is a hindrance to the integration of PSWs. However, our observations revealed that PSWs became more secure in their own understanding of their role over time. The vagueness which initially caused insecurity and pressure was appreciated as freedom to design one's own role definition later on. Asad and Chreim (2016) come to a similar conclusion by stating that some ambiguity can be seen as a benefit for PSWs. The question how much clarification or even standardization of the role is necessary, on the one hand, and how much freedom for individual role-finding is required, on the other hand, is a topic in several publications. Moll et al. (2009) discuss the positive implications of a variable and evolving role and the need for some clarity for PSWs and staff. Heumann et al. (2019), who discuss this issue in detail, are aware of the downside of a fixed role description but still opt for a clear definition of the role. Gruhl et al. (2016), however, conclude that any training or standardization of the role should be based on the authenticity of PSWs, which is to be seen as the core of peer support work. Additionally, the fact that peer support work is a relatively new profession and, therefore, PSWs inevitably lack a basic common identity so far should be considered. Hurley et al. (2018) point this out by comparing the current role-finding of PSWs to the role formation process of mental health nurses. Based on the data gathered during our observations, we doubt that the PSWs' different tasks and offers can be grasped in a completely fixed role description in advance. By limiting choices, a fixed role description would undermine the specific way of support PSWs are able to offer. However, a kind of framework role description should be issued in advance to give some guidance to the PSWs. Such a framework gives other mental health professionals an idea about the role of their new colleagues and can, thus, avoid problems during the integration process (Asad and Chreim 2016). This view is in accordance with our own findings in the qualitative interview and focus group parts of our study (Otte et al. 2020a(Otte et al. , 2020b and with the findings of Collins et al. (2016). 
The latter interviewed psychiatrists about their attitudes towards PSWs in the UK and stated that, given the influence of psychiatrists on other team members, a lack of knowledge about the role of PSWs can pose a danger to their integration. Gillard et al. (2015b) suggest the local creation of role understandings within a certain mental healthcare team to promote integration. In their view, such an approach can help to reduce the conflict between role specifications that are either too narrow or too vague. Our findings support this idea. All PSWs who participated in our study developed their roles on a local level in relation to the needs of the patients and non-peer staff on their specific wards. Furthermore, our findings suggest that PSWs benefit from the mutual exchange in transregional PSWMs during their role-finding process. This is in line with the results of our interview study, in which PSWs stated that they perceived these transregional meetings, which can be understood as peer support for PSWs, as especially helpful (Otte et al. 2020b). Potential strains on PSWs and the need for support on various levels are addressed in several publications (Ahmed et al. 2015; Byrne et al. 2019; Cabral et al. 2014; Davidson et al. 2012; Nestor and Galletly 2008; Simpson et al. 2014). There is evidence that the exchange of experiences and mutual support are important when implementing peer support in mental healthcare. Davidson et al. (2012), for instance, suggest hiring at least two PSWs for any ward or team, respectively. Our findings support Davidson's claim. In our study, PSWs were not employed in tandem; however, they experienced it as a relief to know that they were not alone and that other PSWs had similar questions or problems. Additionally, they could give each other advice if needed. The fact that PSWMs generate a safe space in which PSWs can talk openly, without fearing that they could unintentionally offend other mental health professionals, seems very important. Moreover, our findings suggest that regular communication with mentors on the wards and other team members is both helpful for the development of a professional role of PSWs and crucial for their integration into the multiprofessional team. These results are in line with Debyser et al. (2019), who recommend that care providers should establish a support framework for PSWs in collaboration with the PSWs themselves. Davidson et al. (2012) further suggest giving an administrator the role of a "peer staff 'champion'" (p. 127), who is supposed to intervene when problems on an organizational level occur. We also think that, regardless of what it is called or how concretely it is organized on a local level, the importance of strong communication and support structures for PSWs and other team members cannot be stressed enough.
Strengths and Limitations
To the best of our knowledge, our study is the first longitudinal qualitative observation study which gathers knowledge on the integration and role-finding process of PSWs in adult psychiatric hospitals in Germany. Observational studies are rare, which might, at least partially, have to do with the time and effort they require. Observations can give insights into certain routine actions which often cannot be obtained by other qualitative methods, as they are unlikely to be mentioned in an interview or a focus group (Harvey 2018). Moreover, observations show the social context in which people communicate and act (Salmon 2015).
Another advantage of observational studies is that they enable the researcher to circumvent the social desirability bias by showing not only what people say but also how they act (Mays and Pope 1995). Although people may attempt to present themselves in the best light, it is not possible to do so over a longer period of time, especially once they get used to the presence of an observer (Mulhall 2003). In addition to these strengths, there are also limitations which are specific to an observation study. An observation changes the situation observed in two ways. Firstly, any observation influences the social behavior observed. Such influence is unavoidable; therefore, we tried to limit our impact by asking all the participants to act as they usually do and by acting with particular reserve. Secondly, any observation is shaped by the perception of the observer (Mulhall 2003; Salmon 2015). Therefore, it is necessary to reflect thoroughly and to discuss initial interpretations within the research team. Furthermore, during the analysis we added remarks about potential subjective factors which might have influenced the perception of the practices observed. The different professional backgrounds of the observers and research group members were also helpful and important to avoid methodological biases, such as over-identification. Furthermore, qualitative-empirical research has some general limitations. Our study is based on a limited number of observations in a specific type of adult psychiatric hospital in Germany. Considering the differences between hospitals on an organizational level, especially in an international context, and the influence of the individual personalities of the people involved, it is not possible to generalize our results.
Conclusion
Integration of PSWs into multiprofessional mental healthcare teams in adult psychiatric hospitals is possible, and mental health professionals appreciate the special expertise of PSWs. The PSWs' specific roles evolve over a longer period of time, aligned with the specific needs on a local level. This role-finding process should be supported by a framework role description, on the one hand, and sufficient freedom for individual development, on the other hand. Regular opportunities for mutual exchange, such as the offer of "peer support for PSWs" at transregional PSWMs, can help to promote the evolvement of roles and to address specific support needs at different points in time.
Informed Consent
Written informed consent was obtained from all individual participants included in the study.
Cardiovascular Fitness Assessment through 3 Minutes Step Test in Adults of Lahore during COVID-19 Pandemic
At the end of 2019, the world witnessed a disease that is still affecting it; this disease was named Coronavirus Disease 2019 (COVID-19). It is a highly infectious disease that causes severe acute respiratory syndrome. Objective: To find out the impact of COVID-19 on the cardiac fitness of young and middle-aged adults. Methods: A cross-sectional study was conducted at the on-campus physiotherapy clinic of the University of Management and Technology, Lahore. Convenience sampling was used. The sample size was 437. Healthy participants of both genders aged 17-45 years were recruited into the study. A self-designed questionnaire validated through a pilot study was used to record the data. A three-minute step test was performed, and pre- and post-test cardiac rates were recorded. IBM SPSS Statistics for Windows was used to record and analyze all data. Results: The results showed that 271 (59.7%) participants were female while 176 (41.3%) were male; the proportion of young adults was 76% while that of middle-aged adults was 24%. The overall results of the post-test 3-minute step test show that the majority of the population, 30.7% (n = 134), had an excellent cardiac rate, a good proportion of the sample had the same value for good and above-average cardiac rates (f = 22.4%, n = 98), while fewer participants fell into the rest of the categories, namely average, below average, poor and very poor (12.1%, 7.1%, 3.0% and 2.3%, respectively). Conclusions: The study concluded that the overall cardiac capacity of young and middle-aged adults was not affected by the pandemic, but females had better cardiac condition than men.
INTRODUCTION
At the end of the year 2019, the world witnessed a disease which is still affecting the world; this disease was named Coronavirus Disease 2019 (COVID-19) [1]. Coronavirus disease is a highly infectious disease that causes severe acute respiratory syndrome [2]. The disease attacks the human respiratory system and also weakens the immune system, which aggravates underlying medical conditions and increases the chances of other infections as well. Combined, this cyclic process can lead to systemic failure and death [3,4]. Multiple factors are involved in the severity of the disease, which varies from asymptomatic to mild and moderate and can lead to death [5]. The speed of spread was so high that in March 2020 the WHO declared it a pandemic. This virus can spread through close contact, sneezing, coughing, and talking [6]. COVID-19 is extremely virulent, and because of this many preventive and containment measures were taken, including the use of hand sanitizers, gloves and protective eyewear, cancellation of air travel, sports activities and social gatherings, and closures of schools, universities and business activities; a lockdown was also imposed [7,8]. Lockdown is still known as the best procedure to reduce the risk of spread of COVID-19 [9]. As lockdown was imposed across the world, it reduced all types of activities, which resulted in potential adverse effects on human well-being, both psychological and physical. This lockdown also brought a sedentary lifestyle [10].
A sedentary lifestyle, prolonged sitting, and reduced physical activity decrease cardiac and aerobic capacity [11][12][13] and increase the risk of other diseases such as obesity, diabetes, and cardiovascular diseases [14,15]. Physically active persons have better cardiac/aerobic fitness than non-active persons [16][17][18]. Many studies have been conducted to find out the impact of COVID-19 on professionals and general populations; the majority of these studies addressed mental health, stress and anxiety, but we did not find any study showing the effects on cardiac capacity of the sedentary lifestyle caused by lockdown and other restrictions imposed by the government and other legislative bodies, which is why we studied these effects.
METHODS
A single-centered, cross-sectional study was conducted at the on-campus physiotherapy clinic of the University of Management and Technology, Lahore. Using a convenience sampling technique, healthy young and middle-aged participants of both genders, aged between 17 and 45 years and having no known cardiac, respiratory, musculoskeletal or systemic disease, were requested to participate in the study and were divided into two groups: young adults aged 17-30 years and middle-aged adults aged 31-45 years [19]. A total of 550 participants agreed to participate in the study, and a written consent form in either English or Urdu was obtained from the participants. Participants were assessed to determine any kind of risk factors that might limit a person's ability to exercise. For this purpose, the Physical Activity Readiness Questionnaire (PAR-Q) was filled out by the participants [20]. Participants answering "YES" to any question in the questionnaire were excluded from the study, so data from a total of 437 participants were used for statistical analysis. A self-designed questionnaire validated through a pilot study was completed by the 437 participants. It included demographic data and pre- and post-test 3-minute step test values. The procedure was verbally and practically demonstrated to the participants. They were asked to sit comfortably in a chair in a quiet, temperature- and humidity-controlled room. The heart rate and pulse rate of the participants were measured for exactly one minute. A metronome mobile application was used to indicate the steps for the test; the frequency was set to 96 BPM, giving a stepping rate of 24 cycles per minute (1 cycle = 4 clicks). The box used had a height of 30 cm. After completing the test, participants were advised to sit on the chair and remain still, and their heart rate was measured for 1 minute [21]. All the data were recorded and analyzed through IBM SPSS. Measurements were compared between the two groups of young and middle-aged adults using the independent sample t-test for normal data and the Mann-Whitney test for skewed data, whereas the relationship of age and gender with 3-minute step test indices was investigated using multivariate regression analysis. A p-value of less than 0.05 was used to designate statistical significance for all analyses. Ethical approval was granted by the Office of Innovation and Research, University of Management and Technology, Lahore (UMT-008/009-2021), Pakistan.
RESULTS
The frequency of young adults was 76%, while that of middle-aged adults was only 24%. The results also revealed that the post-test heart rate, 91.02 ± 16.08 (SD), was higher than the resting cardiac rate, 73.07 ± 10.64 (SD).
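To make the comparison strategy named in the statistical analysis above concrete, here is a minimal sketch of the two-group test selection using SciPy rather than SPSS. The Shapiro-Wilk normality check and the sample values are illustrative assumptions, not details reported by the study.

```python
import numpy as np
from scipy import stats

def compare_groups(group_a, group_b, alpha=0.05):
    """Independent t-test for normal data, Mann-Whitney U for skewed data,
    mirroring the analysis plan described in the Methods."""
    # Shapiro-Wilk used here as an assumed normality check; the paper does
    # not state which normality test, if any, was applied.
    _, p_a = stats.shapiro(group_a)
    _, p_b = stats.shapiro(group_b)
    if p_a > alpha and p_b > alpha:
        stat, p = stats.ttest_ind(group_a, group_b)
        return "independent t-test", stat, p
    stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    return "Mann-Whitney U", stat, p

# Fabricated post-test heart rates for illustration only.
rng = np.random.default_rng(0)
young = rng.normal(91.0, 16.1, size=332)        # ~76% of 437 participants
middle_aged = rng.normal(91.0, 16.1, size=105)  # ~24% of 437 participants
print(compare_groups(young, middle_aged))
```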
The overall results of the post-test 3-minute step test show that the majority of the population, 30.7% (n = 134), had an excellent cardiac rate, a good proportion of the sample had the same value for good and above-average cardiac rates (f = 22.4%, n = 98), while fewer participants fell into the rest of the categories, namely average, below average, poor and very poor (12.1%, 7.1%, 3.0% and 2.3%, respectively).
[Table: characteristics/variables with their mean values. Notes: normal data are given as the mean ± SD and skewed data as medians (interquartile ranges); a level of significance was set (p < 0.05) and data were analyzed using a: independent t-test for normal data; b: Mann-Whitney test for skewed data.]
In Table 3, the categorical variables, namely age group and the 3-minute step test, were compared between males and females using the Pearson chi-square test, and proportion values are presented. Not surprisingly, in the age groups the number and frequency of women is significantly higher than that of men (p = 0.034). The overall result for all parameters of the 3-minute step test is significant (p = 0.023).
[Table 3 notes: data are presented as frequencies and numbers, f (n); frequencies were compared between genders using a: chi-square test; a level of significance was set (p < 0.05).]
DISCUSSION
Aerobic capacity is one of the most important factors through which we can check or determine a person's cardiac health, whether they are a normal person, an athlete, or a diseased person [22]. It is a well-established norm that a person's cardiovascular health can be predicted from their aerobic capacity: poor aerobic capacity indicates deteriorated cardiovascular health, and vice versa [23]. Many studies have used the 3-minute step test, along with other methods, to measure and check the aerobic capacity and strength of the cardiovascular system [24]. Step tests help to predict cardiac risk factors that appear normal at rest even in chronic conditions, the intensity of impairment associated with disability, and also the prognosis [25]. The study was conducted on more than 400 subjects as per the objectives, and we found that the majority of participants were female and a smaller number were male. The mean age of participants was 27.06 ± 6.96, and they were divided into two groups of young and middle-aged adults, with the major proportion in the young group. In the present study, the mean heart rate at rest and post-test was 73.07 ± 10.64 and 91.02 ± 16.08, respectively. Our study agrees with the findings of another study that measured the aerobic capacity and heart rate of young adults aged 17-24 years at Jinnah Medical & Dental College, Karachi; its resting cardiac rate of 77.53 ± 9.81 is comparable with ours [26]. In another study, patients of both genders had higher cardiac rates than in the current study, which may be due to a higher mean age and associated chronic conditions [27]. Another study was conducted to predict cardio-respiratory fitness in the geriatric population through the step test; it also measured heart rate, which was higher than in our study [28]. In 2015, a similar study was conducted on medical students in India, which indicated that male students were fitter than female students; this feature of their study is the opposite of ours [12]. The current study measured the resting heart rate in both genders: 72.91 ± 11.40 in females and 73.31 ± 9.43 in males.
Another study, conducted on Andrews University students, supports this finding; the resting heart rate of the male students was 73.1 ± 13.5, similar to ours [29]. In our study, we found that the majority of participants had excellent or good cardiac condition, comparatively fewer were average, and very few fell into the poor category. Alongside these statistics, females in our study had the upper hand in aerobic fitness compared with their male counterparts. A similar study checked the cardiac performance of young adults, and its results contradict ours: a very small percentage fell into the excellent category, while the majority were good or average, and females were less fit than men, which also does not support our results. The reason behind this conflict might be the smaller number of participants in that study, as well as its mean age of 19.49 ± 1.62 years compared with ours of 27.06 ± 6.96 years [30]. There are a few limitations to our study: the majority of participants were young adults, and including BMI calculations could have provided a better picture. Physical activity level, working or study hours, level of education, and job status could also have given us a clearer scenario.
CONCLUSIONS
It is concluded that the cardiac capacity of young and middle-aged adults was not affected by the pandemic. Females have better cardiac condition than men, which suggests that females continued their routine work in the home while males lacked physical activity due to the work-from-home mode.
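As a sketch of the Pearson chi-square comparison reported in Table 3, the test can be run on a gender-by-fitness-category contingency table. The column totals below follow the reported category counts, but the split between females and males is an assumption for illustration, not the study's actual table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: female, male. Columns: excellent, good, above average, average,
# below average, poor, very poor. Column totals match the reported
# category counts (134, 98, 98, 53, 31, 13, 10); the gender split is assumed.
table = np.array([
    [85, 60, 58, 32, 16, 6, 4],   # female (n = 261, assumed)
    [49, 38, 40, 21, 15, 7, 6],   # male   (n = 176, assumed)
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # significant if p < 0.05
```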
BioXTAS RAW: improvements to a free open-source program for small-angle X-ray scattering data reduction and analysis
BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data, including size-exclusion chromatography coupled SAXS data. The software is designed for biological data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and more advanced analyses such as calculation of inverse Fourier transforms.
Introduction
Small-angle X-ray solution scattering (SAXS) is a popular structural technique for studying biological macromolecules. SAXS can provide information about the solution state of macromolecules and complexes, including, but not limited to, size, molecular weight, oligomeric state, flexibility, foldedness and overall shape (Jacques & Trewhella, 2010; Blanchet & Svergun, 2013; Graewert & Svergun, 2013; Petoukhov & Svergun, 2013; Vestergaard & Sayers, 2014; Chaudhuri, 2015). Closely tied with the increasing popularity of biological SAXS is the increasing number of computer programs to analyze the data. SAXS data are primarily acquired on two-dimensional area detectors and then reduced to one-dimensional profiles of scattered intensity versus scattering vector magnitude q [q = 4π sin(θ)/λ, where 2θ is the scattering angle and λ is the X-ray wavelength] (Blanchet & Svergun, 2013; Pauw, 2013; Petoukhov & Svergun, 2013; Dyer et al., 2014; Skou et al., 2014). Many programs exist for performing this reduction: standalone programs or libraries such as FIT2D, pyFAI and DPDAK (Hammersley et al., 1996; Benecke et al., 2014; Ashiotis et al., 2015; Hammersley, 2016); custom solutions for a particular synchrotron beamline, such as the version of Blu-ICE (McPhillips et al., 2002) used at BL 4-2 at the SSRL, or a home source company, such as the SAXSLab software from Rigaku; and, increasingly, data processing pipelines at synchrotron beamlines, such as those at P12 at Petra III and BM29 at the ESRF (Blanchet et al., 2015; Brennich et al., 2016). Once obtained, macromolecular scattering profiles are typically averaged and background subtracted, then basic analyses, such as a Guinier fit and calculation of molecular weight, are carried out. Except for FIT2D, pyFAI and DPDAK, all of the software listed above has these capabilities. Other common software packages for these operations are ScÅtter, Primus and the ATSAS utilities (Rambo, 2017; Konarev et al., 2003; Petoukhov et al., 2012; Franke et al., 2017). After subtraction and basic quality checks, the choices for further analysis depend on the question(s) being investigated by the scientist. Many of the programs already mentioned have advanced analysis capabilities, as do others such as GENFIT, the Integrative Modelling Platform (IMP) SAXS software, SASSIE, DENFERT, SASTBX and MEMPROT (Curtis et al., 2012; Liu et al., 2012; Koutsioubas & Pérez, 2013; Spinozzi et al., 2014; Pérez & Koutsioubas, 2015; Koutsioubas et al., 2016; Perkins et al., 2016; Schneidman-Duhovny et al., 2016) and more (see, for example, http://smallangle.org/content/software). There are also programs specifically for analysis of size exclusion chromatography coupled SAXS (SEC-SAXS) data: DATASW, DELA and the US-SOMO HPLC-SAXS module (Malaby et al., 2015; Shkumatov & Strelkov, 2015).
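As a quick reference, the q definition above translates directly into code; a minimal sketch (a wavelength in Å gives q in Å⁻¹):

```python
import numpy as np

def q_magnitude(two_theta_deg, wavelength):
    """Scattering vector magnitude q = 4*pi*sin(theta)/lambda, where
    2*theta is the scattering angle and lambda the X-ray wavelength."""
    theta = np.radians(two_theta_deg) / 2.0
    return 4.0 * np.pi * np.sin(theta) / wavelength

# Example: a 1 degree scattering angle at 1.24 A (~10 keV) gives q ~ 0.088 1/A.
print(q_magnitude(1.0, 1.24))
```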
RAW is unique in that it (i) calibrates, masks and integrates images, (ii) can carry out basic data processing such as averaging, subtraction, Guinier fits and molecular weight calculation, (iii) incorporates advanced processing such as calculation of inverse Fourier transforms and envelopes, (iv) provides basic and advanced SEC-SAXS analysis capabilities, (v) can be used both at a beamline and by scientists on their personal machines, (vi) is open source, (vii) is free to all, and (viii) is available on all major OS platforms. The closest comparison among currently available software is to Primus and ScÅtter, both of which have features (ii), (iii), (v) and (viii), while ScÅtter additionally has feature (vi). Each of the three programs, Primus, ScÅtter and RAW, has features that are unique. The synchrotron beamline pipelines mentioned above also provide many of these features [at least (i)-(iii)], but cannot easily be used on scientists' personal machines or at other beamlines. RAW is ideal for use both at a beamline and by scientists at home for initial data processing, validation and modeling. For these reasons, RAW is already used at several beamlines around the world (Acerbo et al., 2015; Li et al., 2016) and at numerous home sources. This paper describes the new or updated features of RAW since the initial publication about the software (Nielsen et al., 2009). These new and updated features include (i) improvements to the automatic centering and calibration routines, (ii) support for new detector types and new integrated (one-dimensional) data formats such as .dat [the standard extension for ASCII-formatted SAXS files with three-column data of q, I(q) and ΔI(q)], (iii) calculation of Guinier fits manually and automatically, (iv) calculation of molecular weight via four different methods, (v) control of GNOM, AMBIMETER, DAMMIF, DAMAVER and DAMCLUST (Svergun, 1992; Franke & Svergun, 2009; Petoukhov et al., 2012; Petoukhov & Svergun, 2015; Franke et al., 2017) from RAW, (vi) loading of SEC-SAXS data sets as intensity versus frame number curves and calculation of radius of gyration (R_g) and molecular weight across peaks, (vii) singular value decomposition for analyzing the number of significant components in a data set, (viii) evolving factor analysis for model-free deconvolution of data based on the work of Meisburger et al. (2016), (ix) tracking of processing history and analysis, (x) availability on all major operating systems (Windows, Mac OSX and Linux), and (xi) availability of a written tutorial and manual, and tutorial videos online.
Program overview
RAW is a graphical user interface (GUI)-based free open-source program for reducing and analyzing biological small-angle X-ray scattering data. It is a Python-based program released under the GNU General Public License (GPL), with a few routines written in C++ for speed. Pre-built installers, source code, installation instructions, tutorials, manuals, version change logs and support are all available from the project web site: https://sourceforge.net/projects/bioxtasraw/. It has been designed as a quick-to-learn, easy-to-use program that allows users to process data from images to envelopes, and has been improved on the basis of feedback obtained from users at beamlines and in their home laboratories over the course of ~8 years.
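As an aside, the three-column .dat format named in feature (ii) above is simple to parse outside RAW; a minimal, hedged reader sketch (any line that is not exactly three numbers, including header or footer text, is skipped):

```python
import numpy as np

def load_dat(filename):
    """Read a three-column ASCII profile (q, I(q), uncertainty in I(q)).
    Lines that do not parse as exactly three floats are skipped, which
    also ignores header and analysis-footer text."""
    rows = []
    with open(filename) as handle:
        for line in handle:
            parts = line.split()
            if len(parts) != 3:
                continue
            try:
                rows.append([float(value) for value in parts])
            except ValueError:
                continue
    q, intensity, error = np.array(rows).T
    return q, intensity, error
```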
RAW contains significant standalone data reduction and analysis features while also serving as a front end for some existing analysis software from the ATSAS (Franke et al., 2017) package (which is closed source, and is free for academic but not industrial users). This article describes version 1.2.1 of RAW only; features may differ in other versions.
3. Working with images and scattering profiles
3.1. Loading and processing images
Typical SAXS image processing has been described in detail elsewhere (e.g. Pauw, 2013). In brief, it usually proceeds as follows: collect image; apply any necessary corrections to the image, for example dark current subtraction; mask image to remove artifacts, for example detector panel gaps and beamstop shadow; radially average image into a scattering profile of intensity versus q using known calibration values (beam center, sample-detector distance, X-ray energy), also called integration; apply any necessary corrections to the scattering profile, for example the solid angle correction; normalize the data by incident intensity or other equivalent value; and possibly apply additional normalization/correction factors, for example putting the data on an absolute scale. RAW supports all these operations. RAW supports 27 different detector image formats using both the FabIO library (Knudsen et al., 2013) and custom-written functions implemented independently from that library, including reading in the image header for use in processing. The FabIO library supports many major image formats including CBF, HDF5 and Pilatus Tiff, while RAW additionally supports MPA (multiwire), SAXSLab300, FReLoN, FLICAM, ILL SANS D11 and Medoptics image formats. It also calls the pyFAI library (Ashiotis et al., 2015) to automatically calculate sample-detector distance and beam center position using a known calibrant. Masking and integration are carried out as previously described (Nielsen et al., 2009). Scattering profile normalization can now use general mathematical expressions referencing information in the image header or in an external header file. Many beamlines write separate header files containing information necessary for image processing, such as ion chamber or beamstop diode counts. The format of these files is beamline specific. Currently RAW can read in separate header files from CHESS beamlines G1 and F2, MaxLab beamlines I711 and I911-4, APS beamline 18-ID (BioCAT), and SSRF beamline BL19U2. New formats are added by writing a parse function that takes a filename as input and returns a dictionary of header values. Users can also set RAW to calculate and apply an absolute scale factor based on measured standards. All of the image processing settings, as well as settings for advanced processing, can be saved as a configuration file and loaded before image processing. Once set, these parameters are automatically applied to any images loaded into RAW, either manually or in the online mode, allowing easy processing of large numbers of images. The Quick Reduce feature of RAW can be used to process images into scattering profiles without having to load them into RAW, further speeding up the process.
Basics of working with scattering profiles
RAW can create scattering profiles from images and load profiles in .csv, .dat, .fit, .fir, .int, .rad and .sub format from many beamlines and programs.
Scattering profiles created by RAW are saved as three-column (q, intensity and error in intensity) space-separated .dat files with a footer that contains analysis and processing information in JSON format. These saved files are compatible with other standard programs such as the ATSAS software and ScÅtter. Scattering profiles loaded into RAW are plotted and can be easily manipulated. Users may select Lin-Lin, Lin-Log, Log-Lin, Log-Log, Kratky, Guinier or Porod plot axes. Scattering profiles can be averaged, subtracted, merged, rebinned, interpolated, scaled, offset, and truncated at both low and high q. The main scattering profile manipulation interface and plot are shown in Fig. S1 of the supporting information. All of these features are accessible in the manipulation panel, through right-click context menus, and/or through the View and Tools menus in the top menu bar.
Guinier fit
Users can carry out a Guinier fit on a scattering profile, fitting a straight line to log(I) versus q^2 data, using the Guinier fit window (Fig. S2). The window displays the radius of gyration (R_g) and scattering intensity at zero angle [I(0)] obtained from the fit, along with q_min·R_g, q_max·R_g and the r^2 value of the fit. When this window is initially opened, RAW automatically attempts to find an optimal fit range. It does this using a heuristic approach that starts by selecting a set of ten windows of different sizes, with a minimum size of ten data points and a maximum size spanning from the minimum q value of the data set, q_min, to the first q value where I(q_min)/I(q) ≥ 10. Starting at the first point, these windows are stepped along the curve, with a step size equal to the number of q points divided by 50, and a Guinier fit is calculated for each interval (window plus start position). Each interval is assigned a quality score based on (i) how close q_max·R_g is to 1.3, (ii) how far below 1 q_min·R_g is, (iii) the size of the fractional errors in R_g and I(0), (iv) the r^2 score of the fit, and (v) the length of the window (longer is better). More weight is given to (iv) and (v) than to the other parameters. A total score is assigned between 0 (minimum) and 1 (maximum). The window with the highest score, assuming any score is over 0.6, is selected as the Guinier fit. In addition to the automatic calculation, users may manually fit the curve, adjusting the start and end q points of the fit while watching the change in the plotted fit and the residual. This allows fine-tuning and evaluation of data quality problems or artifacts, particularly at low q where aggregation is most apparent. The fit data are saved with the scattering profile and can also be exported to a spreadsheet along with further analysis from the same and other scattering profiles. The analysis is saved when the scattering profile is saved, and can be viewed either in RAW or in a text editor. This is true for all analysis done on scattering profiles in RAW.
Molecular weight
RAW has a molecular weight window (Fig. 1) which can calculate molecular weight (MW) by four methods: (i) reference of the I(0) value to that of a known standard, (ii) the absolute intensity value of I(0) (Orthaber et al., 2000; Mylonas & Svergun, 2007), (iii) the adjusted Porod volume method (Fischer et al., 2010) and (iv) the volume of correlation method (Rambo & Tainer, 2013).
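The Guinier analysis described above reduces to a linear fit of ln(I) against q^2; a minimal sketch follows, with the fit range supplied by the caller rather than found by RAW's automated search:

```python
import numpy as np

def guinier_fit(q, intensity, start, end):
    """Fit ln(I) = ln(I0) - (Rg^2 / 3) * q^2 over points [start, end).
    Returns Rg, I(0) and the q*Rg limits of the fit (q_max*Rg should
    stay near or below ~1.3 for globular particles)."""
    q_fit = q[start:end]
    log_i = np.log(intensity[start:end])
    slope, intercept = np.polyfit(q_fit**2, log_i, 1)
    rg = np.sqrt(-3.0 * slope)   # slope must be negative for a valid fit
    i0 = np.exp(intercept)
    return rg, i0, q_fit[0] * rg, q_fit[-1] * rg
```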
§S1 in the supporting information describes the validation of the implementations of methods (iii) and (iv) [for additional literature related to the supporting information see Valentini et al. (2015)]. Methods (i) and (ii) depend on an accurate measurement of the sample concentration, while methods (iii) and (iv) do not. Having multiple methods is important, as molecular weight determination from SAXS profiles is typically only accurate to ~10% (Mylonas & Svergun, 2007), and different sources of error will affect the different methods. In the original description of method (iii), the regularized intensity from an inverse Fourier transform (IFT) was used for the scattering profile, including extrapolation to q = 0. The implementation of method (iii) in RAW uses the Guinier fit to extrapolate to q = 0, so an IFT is not required. The regularized profile from an IFT can still be used to calculate MW by loading the IFT into RAW, sending the associated scattering profiles to the main plot and carrying out the analysis from there. Use of a Guinier extrapolation has independently been implemented in the SAXS MoW2 calculator by the original authors of this method (http://saxs.ifsc.usp.br/).
IFT determination
Finding the IFT of a scattering profile in RAW can be done using two methods: a Bayesian method (Hansen, 2000), the implementation of which has been previously described (Nielsen et al., 2009) (BIFT), and a method using the regularization parameter determined by the perceptual criteria implemented in GNOM (Svergun, 1992). Figs. 2 and S3 show the new BIFT and GNOM windows, respectively. The GNOM window acts as a GUI for GNOM and DATGNOM from the ATSAS package, requiring a separate installation of ATSAS to work. The BIFT method is completely automatic once initial search parameters are set, while the GNOM window runs DATGNOM when first opened, and then allows manual adjustment of the maximum dimension (D_max) and the q range used. The advanced settings available for GNOM can be adjusted using the Advanced Settings GNOM panel. In both windows, the plots show the P(r) function and the data and regularized scattering profile. In the GNOM window the plots and results [such as R_g and I(0)] are updated as the user adjusts the controls. Once an IFT is determined, the user closes the window and the IFT is plotted in the IFT plot and control panels discussed in §4.
Manipulation history
RAW tracks how a scattering profile was generated and what changes were made to the profile. For any scattering profile generated from an image, RAW tracks the configuration file used, the normalization parameters and the corrections (such as the solid-angle correction) used when integrating the image. For any scattering profile generated from other scattering profiles (such as an average profile), RAW keeps track of the scattering profiles used in the operation. For example, an average profile generated from ten individual profiles will have a history that lists the ten profiles used in the average. This history is recursively generated, so a subtracted profile generated from two averaged profiles will show the two profiles involved in the subtraction and the profiles involved in the average for each of those profiles. History is tracked for averaging, subtracting, merging, interpolation and binning. This history is visible from inside RAW and is saved as part of the footer in the .dat file as previously described. Once saved it can also be viewed in a text editor.
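Returning to method (iv), the volume of correlation calculation lends itself to a short sketch. The integral and the ratio follow Rambo & Tainer (2013); the protein constant is quoted from that method as I recall it, so treat this as illustrative rather than RAW's exact code.

```python
import numpy as np

def mw_volume_of_correlation(q, intensity, rg, i0):
    """Estimate protein molecular weight from the volume of correlation:
    Vc = I(0) / integral(q * I(q) dq), QR = Vc^2 / Rg, MW ~ QR / 0.1231
    (q in 1/A, Rg in A; constant per Rambo & Tainer, 2013)."""
    integrand = q * intensity
    # Trapezoidal integration over the measured q range.
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q))
    vc = i0 / integral
    qr = vc**2 / rg
    return qr / 0.1231
```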
Exporting analysis information
Analysis information, including R_g, I(0), D_max and molecular weight, may be exported from RAW scattering profiles as a comma-separated value (CSV) file. This allows users to easily track processing and compare results from different experiments. RAW can also export file header information and selectively export analysis information as a CSV file. The CSV file type was chosen as it can be directly read into common spreadsheet programs such as Microsoft Excel and LibreOffice Calc (The Document Foundation).
Online mode
In online mode, RAW monitors a folder and loads in new files that are written in the folder. The basics of online mode were discussed previously (Nielsen et al., 2009). There are new features that allow custom filtering of what files are loaded and detect modified files to reload when appropriate.
Loading and plotting IFT data
RAW allows users to load IFT files generated by RAW or GNOM and plot them in the IFT window (Fig. S4). This window is also where IFTs generated within RAW, using either BIFT or GNOM as in §3.5, are plotted. This allows comparison of P(r) functions, and the experimental data and regularized intensity can be loaded as scattering profiles in the main plot using the right-click menu.
[Figure 1: The molecular weight window, which can give up to four different estimates of molecular weight.]
[Figure 2: The BIFT window for determining the IFT by that method.]
IFTs can be saved: those generated by BIFT are saved as .ift files, while those generated by GNOM are saved in the standard .out file format compatible with other programs that require that input, such as many programs in the ATSAS package.
Ambiguity determination
Determination of envelopes from measured scattering is not unique, resulting in ambiguity in the final shape. AMBIMETER is a program that quantifies the ambiguity of shape determination from IFT data (Petoukhov & Svergun, 2015). It is available as a utility from the ATSAS package, and RAW implements a GUI for it (Fig. S5). The window reports the number of matching shapes, the ambiguity score and AMBIMETER's assessment of the likelihood of a unique shape reconstruction. It also allows the user to save either the best-fit shape or all shapes from the AMBIMETER library that match the scattering profile. This requires a separate installation of the ATSAS package.
Envelope reconstruction
RAW allows users to generate envelopes from IFT data using the DAMMIF window (Fig. 3), which provides a GUI interface for the DAMMIF, DAMAVER and DAMCLUST programs of the ATSAS package (Franke & Svergun, 2009; Petoukhov et al., 2012; Franke et al., 2017). RAW will start a number of simultaneous DAMMIF runs set by the user (to a maximum equal to the number of processors or requested reconstructions, whichever is less). New runs are automatically started when runs finish, until all requested reconstructions have been made. Users may choose to then automatically run DAMAVER or DAMCLUST on the output. Users control common settings from the DAMMIF window, while advanced parameters can be set in the Advanced Settings DAMMIF window. As with the other ATSAS tools, use of DAMMIF, DAMAVER and DAMCLUST requires a separate ATSAS installation. The DAMMIF control runs in a separate thread from the main RAW program, so users can carry out other processing while waiting for reconstructions to finish.
5. Working with SEC data
5.1. Loading and plotting liquid chromatography data
Liquid chromatography coupled SAXS (LC-SAXS) data are collected while the output of a fast protein/high-performance liquid chromatography (FPLC/HPLC) column flows through a SAXS sample cell (Mathew et al., 2004). Typically a sizing column is used to separate the desired sample from aggregates or other contaminants, ensuring monodisperse data collection, and this is called size exclusion chromatography coupled SAXS (SEC-SAXS). Other types of separation can be used; for example, ion exchange chromatography coupled SAXS (IEC-SAXS) was recently demonstrated. For LC-SAXS, images are collected continuously while the column elutes and the eluate flows through the sample cell. A typical data set consists of initial buffer images, images from the sample in one or more elution peaks from the column, and buffer images once all of the sample has eluted. As an LC-SAXS data set may contain hundreds or thousands of images, researchers have found plotting image intensity (total intensity or intensity at a particular q value) versus image number a useful initial representation of the data (Brookes et al., 2013; Graewert et al., 2015; Malaby et al., 2015; Shkumatov & Strelkov, 2015; Brennich et al., 2016; Hutin et al., 2016). As the protein scatters more strongly than the buffer, this plot is mostly analogous to a SEC UV chromatograph (plotted as time or elution volume versus absorbance).
[Figure 3: The DAMMIF window, which allows users to generate envelopes from IFT curves using the ATSAS program DAMMIF.]
[Figure 4: The SEC-SAXS control window and plot, showing intensity versus frame number (line) and R_g versus frame number (points) for BSA. The shoulder in the intensity and the increase in R_g show that the monomer and oligomer did not fully separate.]
In RAW, users may load any set of scattering profiles as a 'SEC' curve. This allows users to plot total intensity, average intensity or intensity at any arbitrary q value versus 'frame number' (the index of the data item relative to the first data item), as shown in Fig. 4. RAW also has automated loading for SEC-SAXS data from beamlines where the file naming convention is known. At these beamlines, SEC-SAXS data can be loaded in an online mode, where the intensity versus frame number curve is updated whenever new frames are collected. Because of the open-source nature of RAW, anyone could add a new file naming convention and enable online mode for SEC-SAXS data for another beamline.
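The intensity-versus-frame-number representation described above is straightforward to reproduce; a minimal sketch, assuming the reduced profiles are already stacked into a frames-by-q array:

```python
import numpy as np

def sec_series(profiles, q=None, q_value=None):
    """Intensity versus frame number from stacked profiles (rows: frames,
    columns: q points). Returns total intensity per frame, or the
    intensity at the q point nearest q_value when one is given."""
    profiles = np.asarray(profiles)
    if q_value is None:
        return profiles.sum(axis=1)
    index = int(np.argmin(np.abs(np.asarray(q) - q_value)))
    return profiles[:, index]
```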
In this instance, RAW uses the automatic R g determination function described earlier and the volume of correlation method (the volume of correlation method does not require sample concentration and can handle flexible proteins and RNA with default settings). For example, if the user sets a five-frame average size, the buffer-subtracted scattering profiles from frames 1-5, 2-6, 3-7 etc. would be averaged and structural parameters calculated for each of those averages. The user can choose to plot R g , molecular weight or I(0) versus frame number on the same plot as the intensity versus frame number, as shown in Fig. 4. If online mode is enabled and the user has defined the buffer range and average window size, these calculations are carried out on each new frame as it is loaded into RAW. Extracting, saving and exporting data for further analysis From SEC data in RAW users can extract individual scattering profiles of interest, save the SEC curve for further analysis in RAW or export data about the SEC curve for use in any other program. To extract individual scattering profiles, users select a range of frame numbers and send either each scattering profile or the average scattering profile of the selected frames to the main plot. Users can save the entire SEC curve as a RAW-specific .sec file, making it simple to load every profile, subtracted profile, and calculated R g , molecular weight and I(0) value back into RAW to continue analysis at a later point. Additionally, users can export all of the following data as a CSV file: total intensity, average intensity, intensity at a particular q, frame number, R g , uncertainty in R g , molecular weight, I(0), uncertainty in I(0) and filename associated with each frame number. Finally, users can export all of the frame data as scattering profiles, which can be useful if images were loaded and the users wishes to save all of the scattering profiles RAW created from those images. 6. Singular value decomposition and evolving factor analysis 6.1. Singular value analysis Singular value decomposition (SVD) is a mathematical technique that provides model-independent information on the number of unique elements in a data set. SVD has frequently been used to analyze mixture and time-resolved SAXS data (Doniach, 2001;Mertens & Svergun, 2010;Pollack, 2011;Schneidman-Duhovny et al., 2012;Blanchet & Svergun, 2013). Formally, singular value decomposition of an m  n matrix M is a factorization into three matrices such that where U is an m  m unitary matrix, called the left singular vectors; AE is a diagonal m  n matrix, where the diagonal values are the singular values; and V* is the conjugate transpose of an n  n unitary matrix V, the right singular vectors. A typical interpretation of SVD is that the number of singular values significantly above the baseline level is the number of independent components in the data set. RAW can perform SVD on scattering profiles or P(r) functions. This is typically applied to scattering profiles in a SEC-SAXS data set, and the number of significant singular values corresponds to the number of distinct scatterers in the data set. SVD done on a single well separated peak from the chromatograph would yield two significant components: one from the buffer and one from the macromolecule. For SVD done on a poorly separated monomer-dimer peak there would be three significant components: buffer, monomer and dimer. 
RAW allows users to select a range of scattering profiles for SVD and displays the singular values, σ_i, and the autocorrelations of the left and right singular vectors, R_i, for each ith singular value, defined as R_i = Σ_j X_{j,i} X_{j+1,i}, where X is the U or V singular vector matrix. The autocorrelation values and the magnitude of the singular values allow the user to interpret which singular values are significant. Fig. 5(a) shows the SVD analysis window.
Evolving factor analysis
Evolving factor analysis (EFA) is an extension of SVD that allows model-independent separation of scattering profiles from mixed solutions, particularly overlapping chromatographic peaks (Maeder, 1987; Maeder & Neuhold, 2007). This method was recently applied to SEC-SAXS data (Meisburger et al., 2016), and an improved version of the method described by Meisburger et al. has been implemented in RAW. EFA in RAW starts with SVD, proceeds by finding the component start and end points in the evolving factor plots, and finally rotates the significant singular value vectors into scattering profiles. RAW implements two new methods for rotation of the singular vectors besides the iterative approach of Meisburger et al. (2016). The first is the explicit calculation method described by Maeder (1987). The second is a hybrid method that uses the explicit calculation as the seed for the iterative approach, giving faster convergence of the rotation. Fig. 5(b) shows the final window of the EFA analysis in RAW, with extracted scattering profiles from overlapping peaks. EFA is different from other common deconvolution techniques, such as those implemented in US-SOMO and DELA (Brookes et al., 2013; Malaby et al., 2015), in that it is a model-free approach. However, it is not without its own limitations. §S2 contains the mathematics of our EFA approaches, some remarks on validation of EFA compared with the original instance and our experience with practical limitations of the current implementation of the technique.
Conclusions and future outlook
RAW is a free open-source program for calibrating, masking, integrating and analyzing biological SAXS data. It provides the ability to analyze standard SAXS data and SEC-SAXS data, including Guinier analysis, molecular weight calculation, calculation of P(r) functions, singular value decomposition and evolving factor analysis, and the use of the AMBIMETER, DAMMIF, DAMAVER, DAMCLUST, DATGNOM and GNOM programs from the ATSAS package (which is closed source, requires a separate installation, and is free for academic but not industrial users). RAW is available from https://sourceforge.net/projects/bioxtasraw/. It is written in the Python and C++ programming languages and runs on all major operating systems. It has detailed documentation and video tutorials available online. Development, including new features, speed improvements and bug fixes, is ongoing, and other scientists are welcome to contribute to RAW development. In the future we intend to add major features including, but not limited to, normalized Kratky plots, a multi-detector mode to seamlessly generate scattering profiles from measurements on several detectors, improved azimuthal integration with support for detector tilt parameters, flexibility analysis, some level of automated processing of subtracted profiles and similarity testing for scattering profiles. Looking forward, data processing pipelines are becoming more popular at dedicated biological small-angle scattering beamlines.
However, not all analysis can be done at the beamline, and scientists will continue to require portable analysis software that they can run at home. Furthermore, many biological experiments are done at small-angle scattering beamlines or home sources not dedicated to biological samples, and these experiments will require some kind of standalone analysis program. Thus, at least for the foreseeable future, we anticipate that RAW and programs like RAW will continue to be useful for the community. Note added in proof During review and publication of this paper several new versions of RAW were released, adding some significant new features. As of publication, the newest version is 1.3.0, and includes the following major features not described in this paper: ability to use DAMMIN (Svergun, 1999) as well as DAMMIF for envelope reconstructions and to refine averaged reconstructions; a summary window for envelope reconstructions which shows the chi squared, radius of gyration, maximum dimension, excluded volume and estimated molecular weight for each reconstruction, as well as (when available) the normalized spatial discrepancy, number of models included in the average, clustering of models, ambiguity of the reconstruction and resolution of the reconstruction; similarity testing of profiles using the CorMap test , available as a tool for the user and carried out automatically when averaging profiles; absolute scaling using glassy carbon (Zhang et al., 2010;Allen et al., 2017); and normalized and dimensionless Kratky plotting (Durand et al., 2010;Rambo & Tainer, 2011;Receveur-Bré chot & Durand, 2012).
Measurement-Device Independency Analysis of Continuous-Variable Quantum Digital Signature
With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are being given increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while different protocols face different security problems. Considering the importance of quantum digital signature in quantum cryptography, in this paper we attempt to analyze the measurement-device independency of continuous-variable quantum digital signature, especially continuous-variable quantum homomorphic signature. Firstly, we calculate the upper bound of the error rate of a protocol. If it is negligible on the condition that all measurement devices are untrusted, the protocol is deemed to be measurement-device-independent. Then, we simplify the calculation by using the characteristics of continuous variables and prove the measurement-device independency of the protocol according to the calculation result. In addition, the proposed analysis method can be extended to other quantum cryptographic protocols besides continuous-variable quantum homomorphic signature.
Introduction
Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. By virtue of the no-cloning theorem and the uncertainty principle, an attacker can neither distinguish between two non-orthogonal quantum states nor copy an unknown quantum state. Many quantum cryptographic protocols have been proposed based on this feature of quantum states and have been proved secure in both theoretical and experimental ways. According to the fact that a quantum system has either a discrete spectrum or a continuous spectrum, quantum information can be classified into two categories, namely discrete variables and continuous variables. Discrete-variable quantum cryptographic protocols are more widely studied but are more expensive than continuous-variable ones. Continuous-variable quantum cryptography has gained much attention for its practical advantages of low cost, high efficiency and compatibility with current optical fiber communication systems. Since continuous-variable quantum cryptographic protocols are very likely to be implemented in practice, an analysis that assumes all devices are perfect is insufficient to judge whether a protocol is truly secure or not. An attacker could exploit the loopholes of a device to successfully attack a protocol even though it is proved theoretically secure. To analyze the practical security of a quantum cryptographic protocol, the definition of device independency was proposed: a protocol is device-independent if it can complete its task securely even if all [...] is greater than its standard deviation. Since the random variable follows the Gaussian distribution, the probability can be immediately obtained without calculation.
Measurement-Device Independency
If a quantum cryptographic protocol can complete its task securely with untrusted measurement devices, it is called a measurement-device-independent protocol. To analyze the security of a quantum cryptographic protocol under the worst case, we assume measurement devices are prepared and controlled by an attacker and can work in the way that is most favorable to the attacker. Concretely, the assumptions are:
(1) An attacker can tamper with and forge the output of measurement devices.
(2) An attacker can eavesdrop quantum channels by any means.

For simplicity, we call the above assumptions the MDI assumptions. In other words, if the task of a quantum cryptographic protocol is completed under the MDI assumptions, the protocol is measurement-device-independent.

To date, there are only achievements of MDI analysis for QKD protocols. The first MDI-QKD protocol was proposed by Lo et al. [6], which is a discrete-variable quantum cryptographic protocol. The security proof utilizes the monogamous nature of quantum entanglement and removes detector side-channel attacks, although it is not a mathematical proof. In the same year, Ma and Razavi [17] proposed alternative schemes for MDI-QKD using phase and path or time encoding. In their security analysis, the lower bound of the secret key rate was calculated; a protocol is secure if its secret key rate is higher than this lower bound. In 2014, several CV-MDI-QKD protocols were proposed [7]. In the security analysis, the secret key rate of an equivalent one-way CVQKD model was calculated, which is the lower bound for the proposed protocol. The calculation was simplified by applying the theorem of the optimality of Gaussian collective attacks [18]. The analyses of other CV-MDI-QKD protocols [8,9] are similar in calculating the lower bound of the secret key rate. Obviously, we cannot directly calculate the secret key rate of a non-CVQKD protocol, so we should put forward a new method of analyzing its measurement-device independency.

Continuous-Variable Quantum Homomorphic Signature

In CVQDS protocols, there are usually at most three participants, i.e., a signer, a verifier and an arbitrator. Since the verifier and the arbitrator are assumed to be honest, the only untrusted party is the signer, so it seems easy to analyze measurement-device independency. Nevertheless, in 2017, Li et al. [19] proposed a continuous-variable quantum homomorphic signature (CVQHS) scheme, where an aggregator generates a homomorphic quantum signature for verifying the identities of multiple data sources. The aggregator has access to all quantum and classical data in the network, so the scheme probably will not be secure if an attacker takes control of the devices of the aggregator. The existence of an untrusted aggregator has posed a new challenge in analyzing the measurement-device independency of CVQDS.

Li's CVQHS scheme is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. The basic model of the CVQHS scheme is shown in Figure 1. The CVQHS scheme is defined by a tuple of algorithms (Setup, Sign, Combine, Verify) and is briefly described as follows.

(1) Setup

Step 1. A shares two secret keys $k_{A_1}$ and $k_{A_2}$ with V by continuous-variable quantum key distribution. Meanwhile, B shares two secret keys $k_{B_1}$ and $k_{B_2}$ with V. The secret keys are real numbers.

(2) Sign

Step 1. A signs its classical message $a$ by displacing the quadratures of $|\alpha_2\rangle$; the signature displacements $x_{k_{A_2}}$ and $p_{k_{A_2}}$ are determined by the classical message and $k_{A_2}$ (the explicit expressions are omitted). Similarly, B signs its classical message $b$ by displacing the quadratures of $|\alpha_4\rangle$.

Step 2. A sends the signature $|\alpha_2\rangle$ and the classical message $m_A$ to M, while B sends the signature $|\alpha_4\rangle$ and the classical message $m_B$ to M.

(3) Combine

Step 1. M applies Bell detection on $|\alpha_1\rangle$ and $|\alpha_3\rangle$ and obtains the classical measurement results.

Step 2. M mixes $|\alpha_2\rangle$ and $|\alpha_4\rangle$ at a 50:50 beam splitter (BS) and obtains two new signatures $|\alpha_2'\rangle$ and $|\alpha_4'\rangle$.

Step 3. M sends the quantum states $|\alpha_1\rangle$, $|\alpha_2'\rangle$, $|\alpha_3\rangle$, $|\alpha_4'\rangle$ and the classical message to V.

(4) Verify

Step 1.
V measures the $x$ quadrature of $|\alpha_2'\rangle$ and the $p$ quadrature of $|\alpha_4'\rangle$ by homodyne detection and obtains the measurement results $x'$ and $p'$.

Step 2. V measures the $x$ quadrature of $|\alpha_1\rangle$ and the $p$ quadrature of $|\alpha_3\rangle$ by homodyne detection and obtains $x_1$ and $p_3$. Then, V calculates the verification quantities $H_x$ and $H_p$ (the explicit expressions are omitted), where $\tau$ is the transmissivity of the quantum channels.

Step 3. V calculates $a$ and $b$ from the received classical message according to the pre-shared secret keys, to verify the authenticity and integrity of the signatures. If $H_x \le H_{th}$ and $H_p \le H_{th}$, V will confirm that $|\alpha_2'\rangle$ and $|\alpha_4'\rangle$ are the signatures of M and accept the classical messages $a$ and $b$. Otherwise, V will deny the signatures. $H_{th}$ is the verification threshold.

Measurement-Device Independency Analysis Method

If the task of a quantum cryptographic protocol is completed under the MDI assumptions, the protocol is measurement-device-independent. The task of CVQHS is to verify the identities of different data sources at a low error rate. Thus, in the measurement-device independency analysis of the CVQHS scheme, we can calculate the upper bound of the error rate. If the upper bound is negligible under the MDI assumptions, the CVQHS scheme is measurement-device-independent. The upper bound of the error rate is the error rate under the worst case, when an attacker can carry out any possible attack. Thus, we identify the optimal attack model and calculate the error rate under that model.

Attack Model

Considering all possible cases, which are shown in Figure 2, the error rate is equal to the probability of a forged signature passing verification plus the probability of a legal signature being denied. Obviously, the probability of a legal signature being denied is only affected by noise. Thus, we only consider the attack model of the case that an attacker tries to forge a signature. In the CVQHS scheme, when an attacker Eve has the secret keys and is able to prepare quantum states which are entangled with those at honest signers, it can forge a signature that can pass verification.

Throughout the CVQHS scheme, only the aggregator M and the verifier V use measurement devices. Here, we assume the measurement devices controlled by V are trusted because the protocol will be extremely inefficient and meaningless if the verifier is dishonest. Thus, the MDI assumptions only apply to the measurement devices controlled by M, namely a 50:50 BS and two homodyne detectors which are used to perform Bell detection, and a 50:50 BS for mixing the two quantum signatures. According to Assumption (1), Eve is able to tamper and forge the results of Bell detection and the mixtures of quantum signatures at the combining phase. Thus, Eve can forge a quantum signature that can pass verification as long as it obtains the pre-shared secret keys, and the security of the CVQHS scheme is guaranteed by the secrecy of the secret keys. The probability of a forged signature passing verification is equal to the probability of Eve obtaining the secret keys. At this point, the complicated attack model which contains forgery is simplified to a simple eavesdropping model.

According to Assumption (2), Eve is able to eavesdrop all quantum channels by any means. From the perspective of an attacker's ability, eavesdropping can be divided into three types, namely coherent attack, collective attack and individual attack. Coherent attack is the most general attack, by which an attacker can perform joint quantum operations and joint measurement on all quantum states sent via quantum channels.
The proof of security against coherent attack is the strictest proof of security, but the model of coherent attack cannot be effectively parameterized. A common approach is to extend security against collective attack to coherent attack by using the exponential de Finetti theorem [20]. Collective attack is a special case of coherent attack, where an attacker can only perform quantum operations individually on each quantum state. Fortunately, analysis shows that the security bound under coherent attack is the same as that under collective attack for QKD protocols [21]. This result can be applied to CVQHS because a signature in the scheme is a single quantum state. The quantum states in a quantum channel are not correlated, so introducing correlations to them by performing joint operations will not help the attacker obtain more information. Therefore, we can analyze the security against collective attack.

Probability of a Forged Signature Passing Verification

At the first step of the setup phase, the signers and the verifier share secret keys. Assume they use an MDI-QKD protocol in this step; then, Eve can only obtain the secret keys by eavesdropping the quantum channels. The information on the secret keys that Eve can obtain is the mutual information $I(k:E)$, where $k = (k_1, k_2)$ denotes the secret keys and $E$ is the quantum system of Eve. The larger the mutual information $I(k:E)$ is, the more information Eve can obtain. When $I(k:E) = H(k)$, Eve can recover the secret keys accurately. The upper bound of $I(k:E)$ is usually used to estimate the security of a protocol.

The quantum states in the CVQHS scheme are Gaussian states, whose von Neumann entropy can be calculated based on their covariance matrices. Assume the original entangled states prepared by the aggregator have the same density matrix, i.e., $\hat{\rho}_{12} = \hat{\rho}_{34} = \hat{\rho}_{in}$. Their covariance matrix is the standard two-mode squeezed-state covariance matrix determined by $V = \cosh 2r$, the variance of the two-mode squeezed states. Assume the quantum channels are modeled as $|\alpha\rangle \to |\sqrt{\tau}\,\alpha + \alpha_N\rangle$, where $\tau\ (0 < \tau < 1)$ is the transmissivity and $|\alpha_N\rangle$ with $\alpha_N = x_N + i p_N$ is thermal noise. Assume the thermal noise in each quantum channel is independently and identically distributed and its quadratures follow a Gaussian distribution: $x_N, p_N \sim \mathcal{N}(0, V_N)$.

After $|\alpha_2\rangle$ and $|\alpha_4\rangle$ are transmitted twice via noisy quantum channels and entanglement swapping is performed, the covariance matrix of $\hat{\rho}_{24} = |\alpha_2\rangle|\alpha_4\rangle$ can be written in terms of $V$, $\tau$ and $V_N$. Then, $|\alpha_2\rangle$ and $|\alpha_4\rangle$ are mixed at a 50:50 beam splitter, outputting $|\alpha_2'\rangle$ and $|\alpha_4'\rangle$. A beam splitter is a Gaussian operator, which does not change the von Neumann entropy of a quantum system. Thus, the von Neumann entropy of $\hat{\rho}_{2'4'}$ can be calculated based on $V_{24}$. $S(\hat{\rho}_{2'4'}|k_{A_1})$ is the von Neumann entropy of $\hat{\rho}_{2'4'}$ when $k_{A_1}$ is given; it can be calculated based on a new covariance matrix. A simple calculation shows that $I(k_{A_1}:E) = 0$, which means Eve cannot obtain any information on $k_{A_1}$. Similarly, we can calculate that $I(k_{A_2}:E) = 0$. Thus, Eve cannot obtain any information on the pre-shared secret keys between the signers and the verifier. The probability of a forged signature passing verification is the probability of Eve guessing the exact secret keys, which is negligible.
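To make the entropy side of this calculation concrete, below is a minimal numerical sketch of the standard Gaussian-state machinery the analysis relies on: the von Neumann entropy of a Gaussian state computed from the symplectic eigenvalues of its covariance matrix. The covariance matrix used here is the textbook two-mode squeezed vacuum with $V = \cosh 2r$, not the specific post-channel matrices of the scheme, so treat this as an illustration of the method rather than of the protocol's exact numbers.

```python
import numpy as np

def symplectic_eigenvalues(cov):
    """Symplectic eigenvalues of a 2n x 2n covariance matrix
    (mode ordering x1, p1, x2, p2, ...; vacuum corresponds to nu = 1)."""
    n = cov.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    # The symplectic eigenvalues are the moduli of the eigenvalues of i*Omega*cov,
    # which come in pairs (+nu, -nu); keep one copy of each.
    nus = np.sort(np.abs(np.linalg.eigvals(1j * omega @ cov)))
    return nus[n:]

def g(nu):
    """Entropy contribution of one symplectic eigenvalue (in bits)."""
    if np.isclose(nu, 1.0):
        return 0.0
    a, b = (nu + 1) / 2, (nu - 1) / 2
    return a * np.log2(a) - b * np.log2(b)

def von_neumann_entropy(cov):
    return sum(g(nu) for nu in symplectic_eigenvalues(cov))

# Example: two-mode squeezed vacuum with V = cosh(2r).
r = 0.5
V = np.cosh(2 * r)
C = np.sqrt(V**2 - 1)
Z = np.diag([1.0, -1.0])
cov = np.block([[V * np.eye(2), C * Z], [C * Z, V * np.eye(2)]])
print(von_neumann_entropy(cov))        # ~0: the joint state is pure
print(von_neumann_entropy(cov[:2, :2]))  # g(V) > 0: the reduced state is thermal
```

Conditional entropies such as $S(\hat{\rho}_{2'4'}|k_{A_1})$ follow the same pattern, applied to the conditional covariance matrix.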
In the above theoretical analysis, we only considered the case of collective attack, which is proved to be the optimal attack model. In fact, simulation or experiment considering more complex scenarios can be conducted to verify our calculation results in future works. It will be much easier to obtain the error rate in this way for complex scenarios such as coherent attack and forgery, which involve complex modeling and calculation in theoretical analysis and cannot be efficiently parameterized [21]. Special attack models may also be implemented to discuss how parameters affect the result of CVQHS.

Probability of a Legal Signature Being Denied

In the CVQHS scheme, if the deviation between the value calculated from a signature and the value calculated from the pre-shared messages is larger than a certain verification threshold, the signature will be denied by the verifier. The deviation can be caused by an attacker or by noise. Here, it is assumed that the verifier receives a signature that was generated by a legal signer and not tampered with by an attacker. Thus, the probability only depends on noise. A verification threshold $H_{th}$ in a noisy environment is given in Ref. [19], which is equal to the variance of $x_{V'} - \tau x_V$. In the verification phase, the verifier compares the calculated quantities with this threshold; if a quantity exceeds $H_{th}$, it will deny the signature. Denote $x_{V'} - \tau x_V$ as a random variable $X$ whose first and second moments are $\mathrm{E}X = 0$ and $\mathrm{D}X = H_{th}$. Thus, the probability of a legal signature being denied is $P(X^2 > H_{th})$. Since $X$ is a linear combination of quadratures, secret keys and classical messages, it follows the Gaussian distribution. According to the property of the Gaussian distribution, $P(X^2 > H_{th}) \approx 0.32$. Thus, the probability of a legal signature being denied is 0.32.

By adding up the two probabilities in Sections 3.2 and 3.3, we can conclude that the upper bound of the error rate of the CVQHS scheme is 0.32 when all measurement devices are untrusted. Although 0.32 is not negligible, the probability of correctly verifying the identities is twice the error rate. Thus, the CVQHS scheme is deemed to be measurement-device-independent.

Discussion

Firstly, we discuss how the parameters of the CVQHS scheme affect the error rate. The calculation of the probability of a forged signature passing verification involves three parameters, namely the variance $V$ of the two-mode squeezed states, the transmissivity $\tau$ of the quantum channels, and the variance $V_N$ of the thermal noise of the quantum channels. According to the calculation result, the probability is always 0 provided $V$ is nonzero, which means an attacker cannot obtain the pre-shared secret keys as long as the entangled states are properly prepared and not collapsed before being used for generating quantum signatures. Noisy quantum channels do not have any influence on the probability of a forged signature passing verification. It is the randomness of quantum states that prevents the pre-shared secret keys from being leaked during transmission.

The calculation of the probability of a legal signature being denied involves the values of both quadratures of the entangled states, the pre-shared secret keys, the transmissivity and the variance of the thermal noise of the quantum channels, and the verification threshold. In the calculation, the parameters follow Gaussian distributions, so the probability can be easily obtained. The probability is influenced by the verification threshold $H_{th}$. If $H_{th}$ is larger, the probability of a legal signature being denied will decrease, but it will be easier for a forged quantum signature to pass verification. If $H_{th}$ is smaller, the probability will increase. Thus, the verification threshold should be carefully set in order to lower the error rate.

Secondly, we discuss the application of our analysis method. Our analysis method can be summarized in the following three steps:

Step 1. Analyze the objective of the protocol and find the parameter that can be used to decide whether the protocol has completed its task.

Step 2. Analyze the topology and the communication pattern of the protocol to obtain a simplified attack model, which may be a sufficiently studied attack.

Step 3. Calculate the parameter under the attack model to judge the measurement-device independency of the protocol.

In our analysis procedure, the parameter is the upper bound of the error rate and the attack model can be simplified to collective attack. Although we only analyze the CVQHS scheme, the analysis method can be applied to other CVQDS protocols by means of calculating the same parameter under a similar attack model. Concretely, the objective of a CVQDS protocol is to verify the identity of a data source, which is the same as for the CVQHS scheme. Thus, at Step 1, the parameter will be the upper bound of the error rate as well. From the perspective of verification results, errors can be classified into two types. The first type of error is the case where a tampered or forged quantum signature passes verification. The second type of error is the case where a legal quantum signature which is not tampered with by attackers gets denied by the verifier. To calculate the error rate, we should construct models for the two types of errors respectively. The first type of error usually involves attackers, so we should construct an attack model. The second type of error is caused by noise, so we should also construct a model for noisy quantum channels.

Constructing an attack model at Step 2 is the key step of our MDI analysis method. The most effective way of attack can be found by means of applying the MDI assumptions to the protocol. Attack models may be different for different CVQDS protocols if the protocols have different network topologies and communication patterns. Since most CVQDS protocols do not involve an untrusted aggregator, we believe attack models for CVQDS protocols will be simpler than for the CVQHS scheme. Furthermore, it seems that the attack model of a CVQDS protocol can often be reduced to an eavesdropping model, because it is necessary for an attacker to obtain secret keys. After simplification, the calculation process at Step 3 will be similar to our calculation.

The above analysis procedure seems to be a general formalism for analyzing measurement-device independency. In this procedure, the key point of analyzing a protocol is to find an appropriate parameter and to construct an attack model. For a complicated protocol carried out in a large-scale network, it may have several tasks that affect each other, and each task is completed by several nodes. It will be difficult to find an appropriate parameter at Step 1. In addition, unintended entanglement among different nodes will not only affect the quantum states transmitted between two legal nodes in an unexpected way, but also increase the complexity of analysis and calculation. It will be difficult to construct an attack model that is simple enough for calculation. Thus, MDI analysis methods for quantum cryptographic protocols other than CVQDS protocols still need to be explored.
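For completeness, the 0.32 figure discussed above follows from a one-line property of the Gaussian distribution: since $\mathrm{E}X = 0$ and $\mathrm{D}X = H_{th}$, the standardized variable $X/\sqrt{H_{th}}$ is standard normal, so

\[
P\!\left(X^{2} > H_{th}\right)
= P\!\left(|X| > \sqrt{H_{th}}\right)
= P\!\left(\left|\tfrac{X}{\sqrt{H_{th}}}\right| > 1\right)
= 2\,\bigl(1 - \Phi(1)\bigr) \approx 0.3173 \approx 0.32,
\]

where $\Phi$ is the standard normal cumulative distribution function. This is simply the probability that a Gaussian random variable deviates from its mean by more than one standard deviation, which is why the text notes that the probability can be obtained without further calculation.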
Concretely, we take a continuous-variable quantum homomorphic signature protocol as an example. The error rate of the CVQHS scheme is equal to the probability of a forged signature passing verification plus the probability of a legal signature being denied. In the analysis procedure, we constructed an attack model in order to calculate the error rate. The attack model was simplified as collective attack by means of applying MDI assumptions to the protocol. Calculation was also simplified by using an advantage of Gaussian states, i.e., the von Neumann entropy of a Gaussian state can be calculated from its first and second moments. Calculation results show that the error rate is 0.32 so that the CVQHS scheme is deemed to be measurement-device-independent. Although we only analyzed the measurement-device independency of the CVQHS scheme, our analysis can be summarized in three steps and applied to other CVQDS protocols. Whether this approach is a general formalism for analyzing the measurement-device independency of all quantum protocols is still an open question and will be discussed in future works.
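Numerically, the error-rate bound assembled above can be reproduced in a few lines. In this sketch, setting p_forge to zero encodes the result $I(k:E) = 0$ derived in the analysis (Eve must guess real-valued keys exactly, a negligible-probability event), and the 0.3173 value is the exact Gaussian tail that the text rounds to 0.32; this is a check of the arithmetic, not a simulation of the protocol.

```python
from scipy.stats import norm

# Upper bound on the CVQHS error rate, assembled from the two terms
# derived in the text (a sketch under the paper's assumptions).
p_forge = 0.0                      # I(k:E) = 0: guessing the real-valued
                                   # secret keys succeeds with negligible probability
p_deny = 2 * (1 - norm.cdf(1.0))   # P(X^2 > H_th) for X ~ N(0, H_th)
error_rate_upper_bound = p_forge + p_deny
print(round(error_rate_upper_bound, 4))   # 0.3173, i.e. ~0.32
```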
Investigating the Interaction between Disability and Depressive Symptoms in the Era of Widespread Access to ART

Introduction

Within the last decade we have witnessed the transition of HIV into a chronic condition in South Africa due to the rapid roll-out of antiretroviral therapy (ART) [1]. A national and government-sponsored ART programme in South Africa has resulted in increased survival rates among people living with HIV (PLHIV) and decreased morbidity [2,3]. However, this transition comes with new health-related needs linked to the chronicity of the disease that for many will span several decades of life [4-8].

Due to the transition from a fatal to a chronic disease, more attention is now being given to co-morbidities and factors that compromise adherence to ART. An early study [9] showed that 81% of patients who were 95% adherent to their medication regimen demonstrated viral suppression. This was in stark contrast with individuals who were only 80-90% adherent, where only half demonstrated successful viral suppression. While early studies in Africa [10] showed great promise around ART adherence, recent literature is less promising and shows a decline in adherence to almost 20-40% over a five-year period [11]. One of the factors associated with ART adherence is comorbid depression [12]. Depression and depressive symptoms are the most common psychiatric co-morbidity reported among PLHIV, with prevalence rates ranging from 25 to 36% [13]. Similar co-morbidities were found by Gonzalez and colleagues in their meta-analysis of 35,029 participants [14]. Depression is also said to have a gendered dimension, with women being 50 to 100 percent more likely to be depressed than men [15].

While ART reduces the risk of death and of developing serious opportunistic infections, there is some evidence which suggests that people living on long-term ART face not only new health-related challenges including depression but may also experience the onset of disability [1,8,16,17]. The disabling effects of living with chronic HIV have been linked to HIV itself, its comorbid conditions, and possible side effects of the medication regimen [18,19]. These may lead to a wide range of changes in body function, such as cognition, vision, hearing, and mental health. Two systematic reviews found that PLHIV of all ages in sub-Saharan Africa experience a diverse range of disabling conditions [8,20]. Literature investigating disability commonly uses the International Classification of Functioning, Disability and Health (ICF) [21] as a guiding tool. Within the ICF, disability is understood as "an umbrella term for impairments, activity limitations and participation restrictions. It denotes the negative aspects of the interaction between a person's health condition(s) and that individual's contextual factors (environmental and personal factors)" (WHO, p. 3). These ICF components and domains can be seen in Table 1 below.
Before ART rollout, a high prevalence of concurrent impairments was reported, with an average of seven impairments and approximately one third of the sample experiencing more than ten impairments in the month prior to their assessment. Rusch et al. [22] reported that 80% of the cohort experienced activity limitations and participation restrictions. Limitations were also reported in another study using the ICF framework [23]. The results showed a high prevalence of physical impairments (sensory function, pain, and hypertension), participation restrictions (learning and applying knowledge, remunerative employment, and economic self-sufficiency), selective activity limitations (interpersonal relationships and interactions, and mobility), and that environmental factors (labour, employment, and legal housing services) had an influence on these individuals' level of ability.

There has been little investigation into the link between depression or depressive symptoms and disability in the growing number of people living with HIV who make use of ART. Generally, epidemiologists have predicted that depression will soon be the leading cause of disability throughout the world [24]. Research has also revealed that depressed patients report an increased level of impairment across physical and emotional domains, much greater than people with other chronic conditions [25]. Additionally, there is a growing body of evidence indicating that depression is associated with more rapid disease progression among HIV-positive individuals [26]. Within the context of ART rollout, the associations between depression, its symptoms, and disability have not yet been investigated in South Africa. However, considering the potential impact of comorbid depression or depressive symptoms and disability on ART adherence, this is an urgent concern. Thus, the primary objective of this paper is to determine the scope of disability and depressive symptoms within the cohort and to investigate the association between both.

Method

This study is one of the primary cross-sectional surveys that were part of a larger study (HIV-Live) to investigate the intersection of disability, health and livelihood in people who are on long-term ART medication [5]. The survey included scales measuring activity limitations or disability with the WHODAS 2.0 [27], depressive symptoms with the CES-D 10 [28], HIV-related health symptoms [29], ART adherence with the Mannheimer adherence index [30], biomarkers (BMI, CD4), and livelihood outcomes using the HIV-Live study scales (Table 2) [31]. This paper reports on the primary cohort in a public health care setting in Johannesburg, South Africa.

Participants and Recruitment

One thousand and fifty-five adult individuals living with HIV (≥ 18 years old, 6 months or longer on ART) were recruited from a large public-health HIV clinic in Johannesburg between August 2014 and May 2015. Sample size and sampling: using Stata 12 and the formula for a one-sample comparison of a proportion to a hypothesized value, with 90% power and a two-sided alpha of 5%, a population proportion of 50% and a hypothesized sample proportion of 55%, the required sample size is 1050; if powered at 80%, the sample size is 967. This study therefore aimed for a sample of 1000.
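As a rough cross-check of the sample-size computation just described, the usual normal-approximation formula for a two-sided one-sample test of a proportion reproduces the reported 1050 at 90% power; the value it gives at 80% power (~783) differs from the reported 967, presumably because Stata's routine uses a different (e.g. exact or score-based) method, so the sketch below is an approximation rather than a reconstruction of the exact command used.

```python
from scipy.stats import norm

def n_one_sample_proportion(p0, p1, alpha=0.05, power=0.90):
    """Normal-approximation sample size for a two-sided one-sample
    test of a proportion (p0 = null value, p1 = alternative)."""
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    num = za * (p0 * (1 - p0)) ** 0.5 + zb * (p1 * (1 - p1)) ** 0.5
    return (num / (p1 - p0)) ** 2

print(n_one_sample_proportion(0.50, 0.55, power=0.90))  # ~1047, close to the reported 1050
print(n_one_sample_proportion(0.50, 0.55, power=0.80))  # ~783 under this approximation
```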
As it was a cross-sectional study, participants were approached consecutively during their routine visits at the clinic. Trained interviewers conducted the survey after informed consent. Confidentiality of all information was assured through coding of the questionnaire. Participants were compensated for their travel costs. Ethical clearance was obtained from the human medical research and ethics committee of the University of the Witwatersrand (ethical clearance no. M131187).

Statistical Analysis

Descriptive, correlational, and logistic regression analyses were conducted. The CES-D 10 was used as the dependent variable, with the WHODAS 2.0, age, adherence, and health symptoms scores acting as independent variables to determine the influence of these covariates on depression. For the bivariate correlational analysis, Pearson's product-moment correlation coefficient was used to analyse the associations between the variables of age, gender, WHODAS 2.0 score, adherence score, BMI, CD4 cell count, livelihood, mental health and physical health, with an alpha level of 0.05. A binary logistic regression, using the backward Wald method, was used to further investigate the relationship between the variables found to be significant in the bivariate analysis and depressive symptoms. This allowed a model based on significant outcomes to be generated; variables which were not significant and/or had high collinearity were excluded from the final model.

[Table 1. ICF components: Body Structure; Body Function; Activities and Participation; Environmental Factors. Detailed domains not recoverable.]

The sample was also divided into two groups: those who experience depressive symptoms (EDS; CES-D 10 score of 10 or more) and those with no depressive symptoms (NDS; under 10) [28]. An independent-samples t-test was used to compare the CES-D 10 groups on the participants' WHODAS 2.0 scores. It must be noted that there were some missing data in the dataset; however, the models and calculations used in the analyses accounted and allowed for missing data once it was categorised as such.

Results

Almost half (45.7%) of the sample scored two or more on the WHODAS 2.0 weighted score, indicating the development of functional limitations and the onset of disability. Over 60% (60.8%) of the sample scored 10 or more on the CES-D 10, indicating the experience of depressive symptoms. There was some gender disparity with regard to depressive symptoms: 54.9% of males and 63% of females displayed depressive symptoms, and of those displaying depressive symptoms, 25.2% were male and 74.8% were female. Both genders experienced similar distributions of relationship status, with 27.1% of the NDS group being married at the time of data collection, as opposed to only 22% of the EDS group. In the NDS group, 67.4% of the individuals earned an income, whilst only half (50.38%) of the EDS group earned an income (Table 3).

The survey also revealed a diverse experience of functional limitations, with scores in all WHODAS domains (mobility, life functionality, cognition, participation in social activities, self-care and getting along with others). Figure 1 illustrates the percentage of participants who scored two or more on the WHODAS 2.0. The figure also illustrates higher percentages of participants who experienced functional limitations in the group with depressive symptoms than in the one without (Figure 1). There was a statistically significant difference in WHODAS 2.0 scores between the group with depressive symptoms and the group without.

[Table 2 fragment (scale descriptions): overall scores converted to a 0-100 metric, where 100 = full health and 0 = experiencing all symptoms; physical and functional health comprises ten questions on depression status, level of bother, how fearful, hopeful, happy or lonely one is, and the effort required to get going; CES-D scores are summed, with a cut-off of 16 and above considered at risk of depression, and a converted summary score is calculated on a 0-100 metric (100 = no symptoms of depression, 0 = severe symptoms).]

Depressive symptoms were associated with health symptoms (r=0.412), functional limitations (r=0.272), age (r=0.146), and gender (r=0.109), but not with clinical outcomes such as BMI (r=0.001) or CD4 count (r=0.056). Hence, people with more health symptoms, higher functional limitations, older age, and female gender were more likely to experience depressive symptoms. ART adherence was not associated with depressive symptoms but was associated with functional limitations (r=-0.133), with lower adherence scores being more likely at higher functional limitation scores. The bivariate analysis suggested that gender, age, health symptoms, and functional limitations predicted depressive symptoms.

We used a logistic regression model (Table 4) to further investigate the effects of age, gender, level of ART adherence, WHODAS 2.0 weighted disability score, and health symptoms on outcomes for depressive symptoms. Gender and adherence score were initially included as independent variables, but were not significant in the model (p=0.866; p=0.313) and were therefore excluded from the final model under the Wald criterion. The logistic regression model was statistically significant, χ²(3)=199.63, p<0.001. The model explained 23.7% (Nagelkerke R²) of the variance in depressive symptom classification, indicating a moderate relationship between prediction and grouping. Prediction success overall was 69.1% (Table 5).
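To make the modelling pipeline concrete, here is a minimal sketch of a logistic regression of the kind described in the methods, together with the Nagelkerke R² reported above. The data frame and column names ('cesd10', 'whodas', 'age', 'symptoms') are hypothetical stand-ins for the study variables, and plain logistic regression (without the backward-Wald elimination or allowance for clustering) is used, so this illustrates the calculation rather than replicating the authors' SPSS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_depression_model(df: pd.DataFrame):
    """Logistic regression of depressive-symptom status on disability,
    age and health symptoms, plus Nagelkerke R^2."""
    df = df.copy()
    df["eds"] = (df["cesd10"] >= 10).astype(int)   # EDS vs. NDS grouping
    X = sm.add_constant(df[["whodas", "age", "symptoms"]])
    res = sm.Logit(df["eds"], X).fit(disp=False)
    # Nagelkerke R^2 from the fitted and null log-likelihoods.
    n = len(df)
    r2_cs = 1 - np.exp(2 * (res.llnull - res.llf) / n)       # Cox & Snell
    r2_nagelkerke = r2_cs / (1 - np.exp(2 * res.llnull / n))  # rescaled to [0, 1]
    return res, r2_nagelkerke
```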
Discussion

We investigated the link between depressive symptoms (CES-D 10) and functional limitations/disability (WHODAS 2.0) in people living on long-term ART medication. The majority of participants experienced at least one functional limitation, further substantiating the findings from previous studies [1,5-8,16,17,23,31]. Participants who experienced depressive symptoms had higher levels of functional limitations, mirroring the findings from our sister study in a semi-rural area [5]. The disability literature has already established a close link between the experience of disability and depression [21]. Results from our study support the notion of a triple burden of health-related needs in people living with HIV and on long-term ART, relating specifically to functional limitations/disability, depressive symptoms, and HIV-related comorbidities.

The direction of the association is complex, in that the effect of functional disability on depressive symptoms can be bidirectional; i.e. functional limitations can be driven by depressive symptoms or other socio-economic drivers, or depressive symptoms can change the way we function. Our model revealed that the strongest predictors of depressive symptoms were health symptoms, followed by disability. It is well established that medical conditions increase the risk of disability, which in turn may influence the depression status of an individual. However, disability can also influence health-seeking behaviour, hence increasing health disparities and the risk of depression [32]. Our model found that increased health conditions and functional limitations as well as older age were predictors of depressive symptoms; however, the directionality of these associations is still unclear.

In addition, the literature reveals a close link between these predictors and the risks of poverty and reduced quality of life, a finding supported by our sister study [33]. Further research needs to examine these associations while accounting for the complexity of the triple burden and its impact on HIV outcomes such as ART adherence, health outcomes, and livelihoods. Considering the high incidence of disability and depressive symptoms in both of our studies, we also need to understand how health care and social services can mitigate both [5].

The results of this research corroborate Nomoto et al.'s [13] findings, with the majority (61%) of the sample of PLHIV experiencing depressive symptoms. Additionally, while not as extreme as that reported by Gonzalez et al. [14], the results suggest a gendered dimension influencing the experience of depressive symptoms, with approximately 63% of women being depressed, compared to 55% of men. Our findings also support those reported by Rusch et al. [22] and Katon [32]. We found that, on average, participants experienced several physical health problems at the time of data collection, with participants in the EDS group experiencing significantly more physical health problems than those in the NDS group. Lastly, our findings supported the results of previous research with regard to disability and depressive symptoms or depression [5,23,25]. In every sub-domain of the WHODAS (with the exception of "getting along with people"), participants who experienced depressive symptoms also reported more pronounced functional limitations.

In contrast to the findings reported by Beusterien et al. [12] as well as our sister study [5], which both suggested that adherence and comorbid depression or depressive symptoms are correlated, we found no such association in this study. Other variables, such as socioeconomic or livelihood outcomes, which intersect with depressive symptoms (as illustrated by the bivariate correlation), were not included in the model because they did not alter the odds of an individual displaying depressive symptoms [34,35]. Additionally, we found no evidence that depressive symptoms were associated with time since first HIV diagnosis [26]; however, the difference in health symptoms between the two groups suggests that there could still be a link. It is important to note, though, that the average participant in this study had been living with HIV for approximately seven and a half years and had been on ART for approximately seven years before participating in this study. This length of time may have had an influence on the outcome of the results due to the association between time on ART and depressive symptoms.

Considering the high incidence of depressive and health-related symptoms, the onset of disability indicated in this sample, and its potential impact on treatment adherence and livelihoods, we need to consider how a continuum of care can cater for these new health-related needs of people on long-term ART. The National Mental Health Policy Framework and Strategic Plan (2013-2020) recognises that the relationship between mental illness and HIV is complex, in that both conditions are exacerbated by and negatively impact each other [34]. Thus, the framework challenges the mental health service delivery platform to ensure that both community- and district-based models are developed and implemented to meet the needs of all mental health care users, especially those who have complex comorbid diagnoses.
The UN Strategy for HIV, in targets 11 and 19, refers to enabling and funding community-based responses in key priority populations [36]. A manifestation of depressive symptoms and disability in the HIV disease profile calls for a shift in our thinking that includes rehabilitation approaches within HIV treatment, care, and support. An increased level of support from governmental and non-governmental organisations is needed to change policy and practice in order to accommodate the needs arising from living with chronic HIV. PLHIV require access to treatment programmes which can mitigate the impact of both disability and depressive symptoms. Thus, there is a need to include disability in health priority programmes and research. South African policy and programme documents have recently acknowledged this need [34,37]. Similarly, the country's National Strategic Plan on HIV (2012-2016) [38] includes, in Objective three, the goal to promote wellness and prevent co-infections such as TB, or disability. However, there are no implementation recommendations or targets within the NSP and its operational plan on how to mitigate disability. More recently, South Africa has released its new Framework on disability and rehabilitation, which includes a section on HIV. While South Africa is reviewing its state of readiness to implement this framework, it needs to consider how it can cater for the millions of people already living with HIV who may be at risk of developing disabilities and depression. Within the wider Southern African region, UNAIDS sets, with its ambitious targets in the HIV Fast Tracking Strategy (90:90:90) [39], goals for 2030 that will not be achieved without including disability in the continuum of care, as disability holds the risk of impacting health-seeking behaviour, adherence to ART and livelihoods.

Conclusion

A growing body of evidence suggests that depression, depressive symptoms, functional limitations, and disability are linked to living with chronic HIV. Our work highlights the additional link between depressive symptoms and disability, both of which have the potential to affect the outcomes of living with chronic HIV negatively. In addition, it reaffirms that older age and increased health symptoms are linked to disability. In contrast to previous research, when included in the regression model, gender and ART adherence were not found to be significant contributing variables to depressive symptoms when other covariates (disability, age, physical health) were taken into account. This indicates that as people grow older and live longer with HIV, the need for rehabilitative services to mitigate both depressive symptoms and the onset of disability increases. It is likely that there is a two-pronged path regarding the link between depressive symptoms and disability. The experience of disability has a debilitating effect on an individual's ability to manage their social and professional life, and lowers their ability to actively participate in society as well as their ability to work. This has a far-reaching impact on the individual's socio-economic status and ability to cope, but also on the wider society through its potential impact on ART adherence and economic growth. While this research study only used a cohort of people on long-term ART, the measured prevalence of functional limitations, at around 50% of the sample, is much higher than the prevalence of disability measured in the last South African census (7.5%, 2011).
Although the two data sets use two different measures (WHODAS 2.0 and Washington set of questions) we assume that the prevalence in our sample is still disproportionally high and further research is needed to better understand and measure disability within populations on ART in Africa. In order to better understand this link we need both disability focused and disability mainstreamed research. Thus, research focusing on health priorities such as adherence to ART, the ideal continuum of care for people living with HIV, and comorbidities need to include disability within their research design. Such research will provide us with a better understanding of the prevalence and impact of disability on priority health programmes, the scope and types of disabilities experienced by people living with chronic HIV, and their needs for rehabilitation. Limitations The results of this study provide information on the risk of depression through a well powered study. The nature of the study design cannot provide information on direction of causality. The very nature of disability is complex and has multidirectional relationships with other factors. As such, there is need for further qualitative research to understand the more explorative questions that emerge.
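For readers wanting to reproduce the core grouping on their own data, the EDS/NDS split and group comparison described in the methods amount to a cutoff on the CES-D 10 total followed by an independent-samples t-test on WHODAS 2.0 scores. A minimal sketch, with hypothetical column names:

```python
import pandas as pd
from scipy.stats import ttest_ind

def compare_groups(df: pd.DataFrame):
    """Split on the CES-D 10 cutoff and compare WHODAS 2.0 scores."""
    eds = df.loc[df["cesd10"] >= 10, "whodas"]   # experiencing depressive symptoms
    nds = df.loc[df["cesd10"] < 10, "whodas"]    # no depressive symptoms
    t, p = ttest_ind(eds, nds, nan_policy="omit")  # missing data treated as such
    return t, p
```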
Brain size affects female but not male survival under predation threat

There is remarkable diversity in brain size among vertebrates, but surprisingly little is known about how ecological species interactions impact the evolution of brain size. Using guppies, artificially selected for large and small brains, we determined how brain size affects survival under predation threat in a naturalistic environment. We cohoused mixed groups of small- and large-brained individuals in six semi-natural streams with their natural predator, the pike cichlid, and monitored survival in weekly censuses over 5 months. We found that large-brained females had 13.5% higher survival compared to small-brained females, whereas brain size had no discernible effect on male survival. We suggest that large-brained females have a cognitive advantage that allows them to better evade predation, whereas large-brained males are more colourful, which may counteract any potential benefits of brain size. Our study provides the first experimental evidence that trophic interactions can affect the evolution of brain size.

INTRODUCTION

Brain size variation is ubiquitous in the animal kingdom (Striedter 2005), and it is often suggested that ecologically adaptive variation in brain size is maintained by selective trade-offs. For example, larger brains enhance cognitive ability, whereas increased brain size also imposes large energetic demands that can override the cognitive benefits or even favour smaller brains (Aiello & Wheeler 1995; Kotrschal et al. 2013a). Several comparative studies have shown that brain size and behaviours indicative of cognitive ability are positively associated (Tebbich & Bshary 2004; Overington et al. 2009; Reader et al. 2011; MacLean et al. 2014). Recent experimental evidence further corroborated the link between a larger brain and improved cognitive abilities because replicated selection lines of guppies (Poecilia reticulata) bred for large brain size performed better in tests of cognitive ability than selection lines bred for smaller brains [Females: (Kotrschal et al. 2013a); Males: (Kotrschal et al. 2014a)]. However, brains are among the most energetically costly organs in the vertebrate body (Raichle & Gusnard 2002). The high energetic costs of brains have been shown with direct metabolic measurements (Raichle & Gusnard 2002), and there are also evolutionary trade-offs between brains and other metabolically costly tissues (Navarrete et al. 2011; Kotrschal et al. 2013a; Tsuboi et al. 2014). Taken together, these studies provide compelling evidence that increased brain size improves cognitive ability, but also imposes high energetic costs. For selection to favour the evolution of increased brain size, the cognitive benefits must therefore outweigh the high energetic costs (Striedter 2005). The problem is that it is unclear whether or how selection favours increased brain size. Therefore, our aim was to conduct an experiment to determine how brain size affects fitness, as part of a larger study on the evolution of brain size in guppies.

Several studies have shown that brain size is heritable [e.g. h² in guppies is around 0.63 (Kotrschal et al. 2013a)]; however, to date, the only evidence that larger brains confer fitness benefits comes exclusively from comparative studies. For example, large-brained bird species have higher survival in the wild (Sol et al. 2007) and they are better at colonising urban environments (Maklakov et al. 2011) compared to small-brain species.
Also, in several taxa large-brained species [mammals (Sol et al. 2008), birds (Sol & Lefebvre 2000), reptiles (Amiel et al. 2011), but not fishes (Drake 2007)] are more likely to establish viable populations after introduction events compared to small-brain species. The cognitive buffer hypothesis explains those patterns by suggesting that larger brains buffer individuals against environmental challenges by facilitating the construction of behavioural responses, which in turn increase survival (Allman et al. 1993;Deaner et al. 2003;Sol 2009). To complement studies on macroevolutionary patterns, studies on fitness within species are needed, and especially with experiments that can assess causality concerning the selective consequences due to variation in brain size. We therefore performed a test to determine how experimental changes in brain size affect survival under predation. We used guppy selection lines that had been artificially selected for either large or small relative brain size. These selection lines differ by up to 13.8% in brain size relative to body size (Kotrschal et al. 2014a) and, because body size does not differ between lines, they also differ in absolute brain size (Kotrschal et al. 2013a(Kotrschal et al. , 2014a. Large-brained individuals from these selection lines have been shown to outperform small-brained individuals in tests of cognitive ability (Kotrschal et al. 2013a(Kotrschal et al. ,b, 2014a. Large-brained individuals of both sexes are more exploratory and show a decreased hormonal stress response (Kotrschal et al. 2014b). Additionally, large-brained males are more colourful than small-brained males (Kotrschal et al. 2015), and though colouration enhances male mating success, it also increases conspicuousness to predators (Endler 1980). To compare the fitness consequences of selection for increased brain size, we conducted a competition study to test how large-and small-brained individuals from these selection lines differ in survival when exposed to naturalistic predation pressure. We constructed large, replicated semi-natural 'streams' in which we established mixed populations of marked large-and small-brained animals. In weekly censuses, we then monitored survival in the presence of a natural guppy predator, the pike cichlid (Crenicichla alta) until a predefined criterion of 50% survival of the populations was met. Assuming larger brains improve predator avoidance, we expected large-brained females to have higher survival under predation pressure, unless the energetic costs are over-ridden by such benefits. We had no particular prediction for males because large-brained males are more colourful, and therefore they may be more conspicuous to predators than small-brain males (Endler 1980), which may result in no advantage or a survival disadvantage. Directional selection on brain mass We examined the relationship between brain size and survival in laboratory lines of Trinidadian guppies that were artificially selected for large or small relative brain size (Kotrschal et al. 2013a). We used laboratory descendants of wild guppies (P. reticulata), whose founders (> 500 individuals) were imported in 1998 (caught in the lower regions of Quare river, Trinidad) and since then kept in large populations (> 500 individuals at any time) where they were allowed to reproduce freely. 
Starting in 2011, the brain size selection lines were generated using a standard bidirectional artificial selection design that consisted of two replicated treatments (three independent up-selected lines and three independent down-selected lines). Since brain size can only be quantified after dissection, we allowed pairs to breed at least two clutches before sacrificing the parents for brain quantification. We then used the offspring from parents with large or small relative brain size to breed the next generation. More specifically, to select for relative brain size, we selected on the residuals from the regression of brain size (mass) on body size (length) of both parents. We started with three times 75 pairs (75 pairs per replicate) to create the first three 'up' and 'down' selected lines (six lines in total). We summed the male and female residuals for each pair and used offspring from the top and bottom 20% of these to form the next-generation parental groups. This means we used the offspring (two males and two females) of the 15 pairs with the largest residual sums for up-selection and of the 15 pairs with the smallest residual sums for down-selection for each generation. To avoid inbreeding, full siblings were never mated. See Kotrschal et al. (2013a) for full details on the selection experiment. The selection lines differed in relative brain size by 9% in F2 (Kotrschal et al. 2013a) and in a subset of males of F3 by 13.8% (Kotrschal et al. 2014a), while body size did not differ between the lines. Overall, the effect of selection on brain size was not different between the sexes (Kotrschal et al. 2013a).

Prior to the survival experiment, all fish were housed in 50-L tanks, separated by brain size selection line, sex and replicate, containing 2 cm of gravel with a biological filter and java moss. The laboratory was maintained at 26°C (resulting in 25°C water temperature) with a 12 : 12 light : dark schedule. Fish were fed a diet of flake food and freshly hatched brine shrimp 6 days per week.
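A minimal sketch of the residual-based selection criterion described above: regress brain mass on body length, sum the residuals of the male and female of each pair, and take the offspring of the top and bottom 20% of pairs. Column names ('brain_mass', 'body_length', 'pair_id') are hypothetical; with 75 pairs per replicate, the 20% fraction recovers the 15 pairs per direction mentioned in the text.

```python
import numpy as np
import pandas as pd

def select_breeders(parents: pd.DataFrame, fraction: float = 0.20):
    """Rank breeding pairs by relative brain size (residuals from a
    brain-mass-on-body-length regression, summed over each pair)."""
    slope, intercept = np.polyfit(parents["body_length"], parents["brain_mass"], 1)
    parents = parents.assign(
        resid=parents["brain_mass"] - (intercept + slope * parents["body_length"])
    )
    pair_score = parents.groupby("pair_id")["resid"].sum().sort_values()
    k = int(len(pair_score) * fraction)      # e.g. 15 of 75 pairs
    down_selected = pair_score.index[:k]     # smallest residual sums
    up_selected = pair_score.index[-k:]      # largest residual sums
    return up_selected, down_selected
```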
The survival experiment

The experiment was performed in a segmented glass ring tank (outer/inner diameter: 7.3/5.3 m), which we compartmentalised into six same-sized parts by inserting opaque PVC sheets, enabling us to create six replicate 'streams' of 3.08 m² each. Our streams were based on an elegant design used by Endler (1980). Strips of filter foam between the sides of the ring tank and the PVC sheets held the sheets in place and allowed some water flow between streams. Our aim was to recreate the natural environment of guppies in Trinidad. We therefore filled the streams with a layer of coarse, rounded, naturally multicoloured limestone gravel (3-8 mm), with which we crafted areas of different depths. The created water depths ranged from 0.5 to 40 cm (gravel depth: 3-40 cm) with relatively even gradients of c. 30° between different depths. We installed two Eheim filter pumps to create filtration and water flow (2400 L h⁻¹ per pump) from the shallow to the deep area (the effective stream length from the outlet of the filter in the most shallow area to the deepest point was 4.0 m; Fig. 1). The shallow areas provided refuges for the guppies in which the pike cichlid could not hunt (Endler 1980). We also placed an additional refugium [white PVC box, 40 × 30 × 20 cm, gravel on the bottom] with one round 10 cm wide opening in the shallowest area. The box, in which the hose that carried water from the filter from the deepest area ended, was partly submerged, and fish could enter and exit it freely. We added java moss (Taxiphyllum sp.) and water snails (Planorbis sp.) as natural destruents of organic waste. Electric heaters kept the water temperature at 25°C.

Fish were fed once daily (in the morning) by scattering a near ad libitum ration of flake food and freshly hatched Artemia over the deeper areas of the stream. The amount of food was adjusted so that most food would be depleted within 3-4 min of feeding. The water current quickly dispersed the food to all areas of the streams; the flakes slowly sank to the bottom while the Artemia remained in the water column. Animals could thus feed from the surface, the water column and the bottom. During the feedings, we never observed any predator activity. Lighting followed a 12 : 12 schedule with bright half-hour periods of increasing and diminishing brightness, simulating dusk and dawn. During the night, faint lights simulated moonlight.

At the start of the experiment (week 0), we stocked each stream with 800 fully mature, adult guppies (mean age: 220 ± 30 days, virgin and naïve to predators) balanced over sex and brain size, which had been tagged with visible implant elastomer tags 3 weeks earlier. Large-brained animals were marked with a green and red dot on the left and right body side, respectively, just below the dorsal fin. Small-brained animals were marked similarly but with the green dot on the right side and the red dot on the left side. For the first 6 weeks following introduction, the animals were allowed to acclimatise to the novel environment. At week seven, we counted all marked fish (794 ± 2 per stream, mean 6-week survival probability per stream: 99.1%; Fig. 2; results section below), added one adult pike cichlid (C. alta, body size: 9.9-15.7 cm) per stream and stocked the deepest area of the stream with three clay pipes as shelter for the predator. Crenicichla are pike-like carnivores with 'ambush and stalk' hunting strategies; they are often sympatric with the guppy and can impose a high predation pressure (Houde 1997; Johansson et al. 2004). In the setting used, up to three predation events per hour per cichlid can be expected (J. Endler, personal communication). The fish were wild-caught and imported via the aquarium trade. Although the exact location of origin is unknown, and it is therefore impossible to know whether those individuals had lived with guppies in sympatry, it is safe to assume that they had foraged on small fish before (Johansson et al. 2004). Six weeks prior to introduction to the streams, cichlids were fed exclusively on live guppies, and all individual predators consumed them readily.

We conducted weekly censuses of all marked fish until a predefined criterion was met, where the last of the four subgroups (large- and small-brained males and females) reached a mean of 50% survival in all streams. This survival criterion was met at week 20 (Fig. 2). At this time, fish were ca. 7 months old (mean age: 218 ± 30 days). Because one predator showed signs of stress (hiding and very little feeding) at weeks 12-15, we replaced it at week 15 with another one. The distressed predator fully recovered in its private tank. Another predator was depleting the guppy population at twice the rate of the other predators during weeks 12-15.
We therefore food-supplemented this predator after the census at week 15 with one dead adult guppy from the pet shop every second day, to keep predation rates comparable between experimental streams.

Statistical analyses

To determine whether relative brain size influences survival under semi-natural conditions, we used two complementary approaches. First, we assessed potential differences in survival time using a proportional hazards-based mixed-effects Cox regression model, which utilised all census data (Harrell 2001). Second, we determined the survival probability at the end of the experiment with a generalised linear mixed-effects model (GzLMM), for which we used only individual numbers at the beginning and at the end of the experiment.

[Figure 2. Timeline of the experimental procedure (weeks 0-20) to determine the relationship between brain size and survival in guppies. The small black arrows indicate whole-population censuses; the grey arrow indicates introduction of the guppy predator, a pike cichlid. The weekly censuses stopped after a predefined 50% survival criterion was met at week 20.]

Survival duration

As the dependent variable in the Cox regression we used individual survival (present/absent) at every census; as fixed factors, sex (males/females), brain size selection regime (small/large) and their interaction; and as random effects we included stream nested in replicate (three replicates, two streams per replicate). We also analysed male and female survival separately, analogously to the model described above but without sex in the model, as guppies show a high degree of sexual dimorphism in colouration, size and behaviour (Houde 1997), which could potentially result in pronounced sex differences in survival. Sex-specific effects on a range of traits were also found in the brain size selection lines (Kotrschal et al. 2012, 2013a, 2014b). This part of the analysis was done using the 'coxme' package in R (R Development Core Team 2006; Therneau 2009).

Survival likelihood

Since most, but not all, fish had survived the 7-week acclimation period before introduction of the predator (average per-tank survival: 99.1%, see above and results below) and we were interested in the survival after the predator was introduced (week 7) until reaching the 50% criterion (week 20), we used a binary probit-link GzLMM to analyse survival at the end of the experiment, with the number of fish present at week 20 as the dependent variable and the number of fish present at week 7 as the independent variable. We used sex and brain size selection regime as fixed effects and replicate as a random effect, analogously to the models described above. Similarly, we analysed survival of both sexes first in a combined model, and then in two sex-specific models. For survival in the first 6 weeks of the predator-free acclimation period, we used an analogous general linear mixed model (GLMM) with the number of not re-found fish as the dependent variable. These analyses were done in SPSS 22.0, SPSS Inc., Chicago.

Ethical note

Breeding and marking of experimental fish comply with Swedish law and were approved by the Uppsala ethics committee. Animal care procedures during the predation experiment were discussed and approved by the Veterinary University of Vienna's institutional ethics committee, in accordance with good scientific practice (GSP) guidelines and national legislation.
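To illustrate the two complementary analyses just described, here is a sketch in Python (the original analyses used the R package 'coxme' and SPSS). lifelines has no random-effects Cox model, so clustering on stream is used as a stand-in for the replicate/stream random terms, and a binomial GLM with a logit link stands in for the probit-link GzLMM; all column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# One row per fish: 'weeks' = last census week seen, 'event' = 1 if it
# disappeared, 'large_brain' and 'male' are 0/1 codes, 'stream' = stream ID.
def cox_model(df: pd.DataFrame) -> CoxPHFitter:
    df = df.assign(brain_x_sex=df["large_brain"] * df["male"])
    cph = CoxPHFitter()
    # Clustering on stream yields robust standard errors in place of the
    # random effects used in the original coxme analysis.
    cph.fit(df[["weeks", "event", "large_brain", "male", "brain_x_sex", "stream"]],
            duration_col="weeks", event_col="event", cluster_col="stream")
    return cph

# Endpoint analysis on per-group counts: survivors at week 20 out of the
# fish present at week 7 (fixed-effects stand-in for the GzLMM).
def endpoint_model(counts: pd.DataFrame):
    endog = counts[["alive_wk20", "dead_wk7_to_wk20"]]
    exog = sm.add_constant(counts[["large_brain", "male"]])
    return sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
```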
RESULTS

After the 6-week predator-free acclimation period, 57 individuals were not re-found and likely died of natural causes. There was no difference between individuals from large- and small-brained selection lines in the number of missing individuals, but males showed higher survival (GLMM: brain size selection regime: F = 0.08, P = 0.777; sex: F = 6.75, P = 0.019). When analysed separately, there was also no difference in initial survival between large- and small-brained females (GLMM: brain size selection regime: F = 0.57, P = 0.484). After the predators were introduced into the streams, the numbers of guppies declined steadily, and the survival criterion was met for all four subgroups at week 20 (Fig. 3a). Although we did not systematically observe pike cichlid hunting behaviour, our observations suggest that guppies were captured during the day by strikes at individuals passing the predator's day roost, whereas during dusk and dawn guppies were captured by active pursuit.

Survival likelihood

The mean survival probabilities at week 20 (estimated means from a GzLMM; ± SE) were as follows: small-brained males: 40.0 ± 7.3%, large-brained males: 38.3 ± 7.2%, small-brained females: 43.6 ± 7.5%, large-brained females: 50.2 ± 7.6%. Overall, females had higher survival than males, and there was a significant sex × brain size selection regime interaction (GzLMM: brain size selection regime: F = 2.61, P = 0.122; sex: F = 27.57, P < 0.001; brain size selection regime × sex: F = 7.82, P = 0.011; see the endpoints of Fig. 3a and b). When analysing the sexes separately, we found no difference in the numbers of large- and small-brained males that had survived until week 20 (GzLMM, males: brain size selection regime: F = 0.68, P = 0.428). However, we found that large-brained females had on average 15.1% higher survival than small-brained females (GzLMM, females: brain size selection regime: F = 10.01, P = 0.010).

DISCUSSION

Female guppies from large-brain selection lines were more likely to survive, whereas we found no survival benefit for large-brained males in our study. We suggest that enhanced predator evasion is the most likely explanation for the survival benefit of large-brained females, and below we explain why factors such as ageing and some other alternatives can be ruled out. Overall, females had better survival than males, which is consistent with previous studies showing that male body colouration increases conspicuousness to predators. It is unclear why large brains did not improve male survival; below we discuss how the enhanced colouration of large-brained males may have increased their vulnerability to predation. We also discuss how our results support the hypothesis that survival under predation is an important selective force in the evolution of vertebrate brain size, and the general implications of this finding. Females from large-brain selection lines may live longer than those from small-brain lines (as a correlated trait in the selection lines), similar to mammal species with relatively larger brains typically living longer (Hofman 1993). But differences in senescence are unlikely to explain our results, for several reasons. First, among the few fish that died during the 6-week predator-free acclimation period, there was no difference between large- and small-brained females. Second, at the last census, the experimental fish were between 300 and 360 days old, whereas guppies have a longer minimum natural life expectancy (around 400 days) in the absence of predation (Reznick 1983).
Third, 95% of the parents of the experimental fish, which we keep in our laboratory for a longevity assay, were still alive when they were 1 year old [18 of 360 individuals (9 pairs) had died, with no differences between groups of different brain sizes; binomial test: large- vs. small-brained males: P = 0.30, large- vs. small-brained females: P = 0.63]. Fourth, large-brained females may be better foragers and thus survive longer; however, our near ad libitum feeding renders mortality due to starvation implausible. Thus, we interpret our results to be driven by predation, since it is highly unlikely that the improved survival of large-brained females was due to differences in senescence or starvation patterns between the large- and small-brained females. The improved survival of large- over small-brained females in our study was most likely due to the cognitive improvements in the large- compared to small-brain lines (Kotrschal et al. 2013a) that enabled them to better avoid predation. Throughout the experiment, the pike cichlids usually sat hidden in clay pipes in the deepest part of the streams, striking at fish passing by. Guppies show predator inspection (Dugatkin & Godin 1992), which may function to obtain information about the predator's state and to demonstrate to the predator that it has been detected (Pitcher 1992). Predator inspection seems to deter predators in some cases (Godin & Davis 1995), but also increases mortality risk in others (Endler 1980; Dugatkin 1992). A change in cognitive ability may impact this behaviour in several ways. Larger-brained animals may be faster at gathering and integrating information about the predator's state and therefore inspect for a shorter time. Such improved learning abilities are thought to be key for increased survival in response to predation (Brown & Chivers 2005). They may also remember previous inspection events for longer and therefore inspect at lower rates. Whether differences in brain size indeed relate to differences in predator inspection behaviour will be clarified in future experiments. Females had higher survival than males in our study, and sex differences in body size, swimming ability and colouration could explain this result. Body size and swimming ability are major determinants of the survival of fish in nature, such that larger (Sogard 1997) and faster-swimming (Houde 1997; Plaut 2001) individuals usually survive longer under predation. In guppies, females are both considerably larger (Houde 1997) and able to swim faster (Kotrschal et al. 2014b) than males. However, the better survival of larger- compared to smaller-brained females cannot be attributed to either of those two factors, since there are no within-sex differences in either body size (Kotrschal et al. 2013a) or swimming speed in the brain size-selected lines. Sex differences in body colouration may also have contributed to males' lower survival by increasing their predation risk. Conspicuous body colouration enhances male mating success, but it also increases the risk of predation (Fischer 1930; Andersson 1994; Houde 1997; Reznick et al. 2004). This adaptive trade-off is particularly well documented in guppies. In his classic paper, Endler (1980) showed that the introduction of a piscivorous predatory fish into naturalistic ponds with guppy populations rapidly decreased the colourfulness of the males in those ponds via colour-dependent predation over two generations.
Thus, male survival declined faster than that of females in our study, which may have been due to males being more conspicuous to predators, as well as having smaller body size and slower swimming abilities than females. The main puzzle is why large-brained males in our study did not have a survival advantage over small-brained males. This result was unexpected, but a recent finding provides a potential explanation. Large-brained males in these selection lines are more colourful than small-brained males (likely due to a genetic correlation between brain size and colouration; Kotrschal et al. 2015), and therefore their increased conspicuousness to predators may have overridden the benefits of having a larger brain. Another recent study, with Drosophila melanogaster, found evidence that sexual selection can enhance cognitive performance (Hollis & Kawecki 2014), whereas our findings suggest that enhanced colouration can override the survival benefits of improved cognition (or other benefits of large brains). Here, we experimentally tested the survival benefit of relative brain size under conditions as natural as possible. The size and topography of the streams and the species of predator closely mimicked the natural situation, while food abundance and stocking densities may be considered slightly higher. Also, in the wild, several different species may prey on guppies (Houde 1997). Arguably, the harsher conditions in the wild may even amplify any brain size-dependent survival differences if large brains enable both better predator evasion and more efficient foraging strategies. Predation is common in guppies and other vertebrate species [examples in Endler (1986)], and predators may drive selection for larger brains within and between vertebrate species (Kondoh 2010). Several comparative studies have found that large-brained bird species show higher survival in the wild (Dugatkin 1992; Sol et al. 2007) and are better at colonising urban environments (Maklakov et al. 2011), while large-brained mammals are more likely to establish viable populations after introduction events (Sol et al. 2008). Variation in predation pressure may also underlie brain size variation at the within-species level, because predation pressure can vary between populations of the same species. For instance, in the guppy's natural habitat, waterfalls often create natural barriers that exclude piscivorous predatory fish from areas above the waterfalls (Seghers & Magurran 1995). This reproductive isolation has led to extensive, ecologically driven differentiation in morphology, behaviour and life history between fish populations inhabiting upper and lower stream areas (Reznick & Endler 1982). In the closely related poeciliid species Brachyraphis episcopi, fish from low- and high-predation sites differ in learning ability (Brown & Braithwaite 2005). We thus predict that such predation differences also select for differences in brain morphology. The fact that larger brains come at a cost likely restrains the evolution towards larger brains under predation. At the individual level, the high energetic costs of brain tissue may force its bearer to forage more, thereby exposing itself for longer to predation threat (Brown 1999). This cost may be offset by a foraging advantage, since larger brains can also be associated with more efficient foraging. By applying an ad libitum feeding regime in our artificial streams, we likely reduced the potential for energy-restricted predation pressure.
At the population level, known trade-offs likely restrict brain size evolution (Kotrschal et al. 2013a; Tsuboi et al. 2014). We have previously shown that individual guppies from the large-brained selection lines produce c. 15% fewer offspring than individuals from the small-brained lines (Kotrschal et al. 2013a). Therefore, small brains and higher reproduction may be successful in low-predation environments with a high abundance of food, while large brains and low reproduction may be more successful in high-predation environments with lower or patchier food resources. Variation in food abundance and species interactions (i.e. predation pressure) may thus have been important ecological factors behind the remarkable variation in brain size that exists among contemporary vertebrates. In conclusion, our study provides experimental support for the long-standing hypothesis that natural selection favours individuals with larger brains, at least under certain conditions. We suggest that a change in brain size may impact predator evasion strategies via changes in cognitive ability. Our study identifies predation pressure as a key selective pressure in the evolution of brain size in natural populations.
ORTs in an Infertile Woman - A Comparative Study

Background: There exists a lacuna in the assessment of ovarian reserve in aged infertile women, which reflects their reproductive potential. Objective: To compare and evaluate the role of serum FSH, LH, E2, Inhibin B and AMH against the ovarian follicle status in infertile women. Material and Methods: A comparative, observational and cross-sectional study was done in 120 females aged 18 to 43 years fulfilling the inclusion and exclusion criteria. Venous blood (5 ml) was collected from subjects for measurement of serum FSH, LH, E2, Inhibin B and AMH on day 3 of menses. Serum FSH, LH, E2 and AMH levels were measured by chemiluminescent immunometric assay. Serum Inhibin B was estimated by the ELISA method. Antral follicle count (AFC) and ovarian volume (OV) were measured by ultrasonography. The serum values of FSH, LH, E2, Inhibin B and AMH were correlated with AFC and OV. A P value <0.05 was considered significant. Results: Our study showed that positive correlations exist among the serum levels of FSH, LH and E2 (p < 0.001). Inhibin B values were raised in the <20 yrs and >40 yrs age groups. Serum AMH, AFC and OV values showed a significant decrease with increasing age (p < 0.001). Serum AMH and OV were positively related to AFC status, while serum FSH, LH and E2 values were negatively related to AFC status. Age was negatively related to AFC status. Conclusion: The AMH test predicts the reproductive potential of aged infertile women (>35 yrs) and can reduce the use of Assisted Reproductive Techniques (ART).

Introduction

The trend towards later motherhood, the increasing age of marriage and the increasing reliance on assisted reproduction techniques have created the need for more reliable tests to assess the ovarian reserve (OR). OR denotes the functional potential of the ovary: it constitutes the size of the ovarian follicle pool and reflects the number and quality of the oocytes within it [1]. Thus, an assessment of OR helps in reflecting the reproductive potential of a woman. Various markers such as serum FSH, serum LH, serum E2, serum Inhibin B, serum AMH, AFC and ovarian volume are available for assessing the OR [2-5]. The hormonal tests show intercycle-dependent variation, while the ultrasonographic markers are subject to interobserver variation. AMH and AFC are considered to be equally predictive of poor ovarian response; however, AMH is considered advantageous over AFC because the AMH concentration can be measured at any time during the menstrual cycle [6-8]. Age-specific AMH levels have been found to be a better predictor of oocyte yield than FSH in women aged between 34 and 42 years [9]. Overall, AMH has been found to be the most sensitive predictor of over- and under-response to controlled ovarian stimulation [10]. Thus, the aim of this study was to measure the values of various hormonal and ultrasonographic markers in infertile women, and furthermore to determine the relation and the strength of correlation of the various variables to the antral follicle count.

Materials and Methods

A comparative, observational and cross-sectional study was conducted in the IVF centre, Department of Obstetrics and Gynaecology, National Institute of Medical Science and Research, Jaipur on 120 infertile females aged 18-43 years between July 2015 and July 2016.
All patients fulfilled the inclusion criteria: regular menstrual cycles of 21-35 days, no current or past diseases, no prior hormonal treatment, body mass index (BMI) of 18-27 kg/m² and no evidence of any endocrine disorder. Patients with any endocrine disorder, abnormal liver function, abnormal kidney function or genital tuberculosis were excluded. After informed consent was taken and a detailed history, physical examination and laboratory examination were obtained, patients underwent measurement of the day 3 serum values of FSH, LH, E2, Inhibin B and AMH, together with AFC and ovarian volume. After compiling the data, statistical analysis was done, and the results were computed and evaluated. Adequate venous blood samples were obtained from the subjects for the measurement of serum FSH, LH, E2, Inhibin B and AMH on day 3 of menses at around 10-11 hrs. Serum FSH, LH and E2 levels were measured by a solid-phase, two-site chemiluminescent immunometric assay (reagent kits from Abbott/Siemens; instruments: Architect/ADVIA Centaur). Inhibin B levels were measured by the enzyme-linked immunosorbent assay (ELISA) method. Serum AMH levels were measured in human serum by a two-site chemiluminescent immunometric assay for in vitro quantitative measurement (reagent kit and instrument from BECKMAN COULTER). Antral follicle count and ovarian volume were measured by ultrasonography. Follicles measuring 2-10 mm were counted by scanning each ovary individually from its outer to inner margins. All scans were done by a single operator on a Voluson E8 (GE Healthcare) with a 5-9 MHz transvaginal volume probe on day 3 of menses. The follicle counts of both ovaries were summed, and this sum was designated the "antral follicle count". Ovarian volume was measured along with the AFC. The diameter of the ovarian contour was measured in three perpendicular directions (D1, D2, D3), and the volume of each ovary was calculated using the formula D1 × D2 × D3 × 0.52. The volumes of the right and the left ovary were summed to obtain the total ovarian volume.

Statistical analysis

The cases were divided into four groups on the basis of age (group 1: <20 yrs; group 2: 21-30 yrs; group 3: 31-40 yrs; group 4: >40 yrs) and also on the basis of AFC (<4, 4-7, 8-12, >12). After compiling the data in tabulated form, the values of the individual variables were compared, and the correlations of the serum values of AMH, FSH, LH, E2 and Inhibin B with the antral follicle count and the ovarian volume were determined. Each correlation was evaluated as positive or negative and was termed significant only if the obtained P value was <0.05; the rest were termed not significant. The results were obtained accordingly.

Correlation between serum AMH and AFC: a positive correlation exists between serum AMH and AFC.

Conclusion

Our study concludes that serum AMH values can be considered for the evaluation of ovarian reserve. Serum AMH levels are comparable to AFC values; this result supports the usefulness of this marker. The outcome of the present study supports the combined use of AMH and an ultrasonographic marker in screening the current status of ovarian function in the general sub-fertile population, as AMH has a role in the processes of initial and cyclic recruitment.
It can be used to identify those patients who are destined to fail induction and ART programs, without incurring the financial burden of several interdependent serum hormonal markers and ART programs. Serum AMH values together with ultrasonographic markers such as AFC can modestly increase the predictability of diminished ovarian reserve (DOR) without increasing the cost of the screening process. An AMH test helps women to beat the biological clock by predicting, through its relation to age, how long they have left to achieve motherhood, and it may also reduce the need for ART in these patients.
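As a small worked illustration of the ovarian-volume calculation described in the Materials and Methods (each ovary as D1 × D2 × D3 × 0.52, summed over both ovaries), consider the R sketch below; the diameters used are hypothetical values, not study data.

```r
# Ovarian volume from three perpendicular diameters (cm), using the
# prolate-ellipsoid approximation D1 * D2 * D3 * 0.52 from the Methods.
ovarian_volume <- function(d1, d2, d3) d1 * d2 * d3 * 0.52

# Hypothetical diameters for one patient's right and left ovary:
right_ov <- ovarian_volume(3.1, 2.2, 1.8)   # cm^3
left_ov  <- ovarian_volume(2.9, 2.0, 1.7)   # cm^3
total_ov <- right_ov + left_ov              # total ovarian volume (OV)
total_ov
```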
Models of Hybrid Springs for Ergonomic Seats and Mattresses

An ergonomic seat or mattress has to provide optimal and uniform support for the body of a man or a woman of similar or slightly differing size centiles. It is a common opinion that spring systems in upholstered furniture are much more durable and more user-friendly than foam systems. The aim of this study was to develop a new construction of an upholstery spring with bilinear stiffness. On the basis of the conducted studies and the analysis of their results, it was shown that traditional bonnell and barrel springs exhibit linear stiffness within the range of deflections of up to 70% of their initial pitch. The new spring designs change stiffness already at deflections of 34% of their pitch. Thanks to this, they may be used in designs of seats and mattresses for individual users, patients or disabled individuals.

INTRODUCTION

Functional cushions and mattresses of upholstered furniture should provide the highest comfort of use, since they belong to the group of furniture with which users have direct contact 14 to 18 hours a day. Thus an ergonomic seat or mattress has to provide optimal and uniform support for the body of a man or a woman of similar or slightly differing size centiles. Among the comfort-quantifying parameters of seats (and analogously also those of mattresses), the ones most frequently cited are: the average human/seat contact pressure, the maximum human/seat contact pressure, the human/seat contact-area size and the extent of symmetry of the human/seat contact area (Yun et al., 1992; Zhao et al., 1994). This comfort depends on the type of material used in the manufacture of seats. It is commonly and justly believed that spring systems in upholstered furniture are much more durable and more user-friendly in use than foam systems. However, the majority of literature items are research studies discussing mainly the problems of selection, modeling and analysis of the stiffness of foam systems in seats of upholstered furniture (Chow and Odell, 1994; Scarpa, 2007, 2009; Brandel and Lakes, 2001; Choi and Lakes, 1992; Lakes, 1987, 1992; Prtre et al., 2006; Scarpa et al., 2004; Webber, 2008). It was repeatedly stressed in those studies that high comfort of use in the case of seats and mattresses is ensured by the application of materials with non-linear, progressive stiffness characteristics. Such characteristics result in a situation where, at slight loads, seats are very soft: they undergo considerable deflections and, in contact with the body of the user, generate slight contact stresses. At increasing loads, the same seats increase their stiffness due to an unproportional reduction of deflections. At the same time, they equalize the mean contact stress in the contact zone with the user's body (Chu, 2000; Smardzewski, 2009; Smardzewski et al., 2005, 2006; Smardzewski and Grbac, 1998; Smardzewski and Matwiej, 2007; Verver et al., 2004; Wang and Lakes, 2004; Wiaderek and Smardzewski, 2008). However, there is a lack of a wider discussion on the necessity of changes in the shapes and stiffness characteristics of springs or spring panels. Several papers were devoted to this problem, indicating the need to replace springs of linear stiffness characteristics with progressive springs (Dzięgielewski and Smardzewski, 1995; Kapica and Smardzewski, 1994; Smardzewski, 1993a,b,c, 2006, 2008a,b, 2009; Smardzewski and Matwiej, 2006). These changes are required due to
anthropotechnical aspects, imposing the necessity to take into consideration the individual needs of healthy and disabled persons (Ambrose, 2004; Smardzewski, 2009; Smardzewski et al., 2005, 2010a; Vlaovic et al., 2008; Winkler, 2005). The aim of this study was to develop virtual (numerical) models of a new construction of an upholstery spring with bilinear stiffness, to be used in seats and mattresses for users of different weight, height and physique. Such springs should be characterized by slight stiffness at small loads and high stiffness at large loads.

MATERIALS AND METHODS

In traditional mattresses, sofas or armchairs, panels composed of bonnell biconical springs or of barrel springs placed in autonomous pockets are commonly used. They differ in pitch, in the diameters of the inner and outer coils, in the number and spacing of coils, as well as in the diameter of the wire from which they were manufactured (Fig. 1). Prior to the process of designing the new springs, it was decided to determine the stiffness characteristics of the springs presented in Figure 1. The springs were manufactured by a Polish mattress producer. A batch of 10 springs of each type was selected for the uniaxial compression test. Testing was performed on a Zwick 1445 testing machine at a load velocity of 100 mm/min. During the tests, load forces were recorded accurate to 0.01 N and deflections accurate to 0.01 mm. Loading was interrupted at a deflection of 80 mm in the case of bonnell springs and 150 mm for barrel springs. On the basis of the evaluation of the recorded compression characteristics, bonnell springs were selected for further tests. It was also decided to verify how the contact stress distribution changes at the contact of a child's body and the body of an adult with a mattress made from the above-mentioned bonnell springs. Using bonnell springs with the geometry presented in Figure 1a, a mattress of 150 × 900 × 2000 mm was manufactured in a Polish mattress factory. The mattress contained a spring core of 10 × 24 springs. The spring core was covered on both sides with felt of 5 mm in thickness and polyurethane foam T2535 of 20 mm in thickness, density of 25 kg/m³ and stiffness of 3.5 kPa. The upholstery summer-winter layer on the summer side was made from an upholstery fabric (38% polyester, 25% cotton, 37% polypropylene) quilted with cotton at 200 g/m², while on the winter side it was made from an upholstery fabric (46% polyester, 54% polypropylene) quilted with natural sheep wool at 250 g/m². An FSA Bed sensor mat by Vista Medical, Ltd. was placed on the surface of the summer side of the mattress (sensing area 762 × 1920 mm, poly thickness 4 mm, sensor dimensions 20.5 × 57.2 mm, sensor gap 3.4 × 3.1 mm, sensor arrangement 32 × 32, standard calibration range 13.3 kPa). It was calibrated prior to measurements and then coupled with a computer. Load on the mattress was generated by a 3-year-old boy of 14.5 kg and 985 mm and a 32-year-old man of 76 kg and 1680 mm, both in a lateral recumbent position. Stresses were recorded for each user three times for 5 minutes, with a frequency of 10 Hz and accurate to 0.133 kPa. From each measurement, a total of 3000 recordings were registered. For further analyses, those recordings were selected in which the values of maximum stresses changed by at most 10% over five hundred successive recordings. Taking into consideration the obtained distribution and maximum values of contact stresses, and on the basis of the discussion of the results, new designs of biconical springs were developed.
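The comfort parameters listed in the introduction (average contact pressure, maximum contact pressure and contact-area size) can be computed directly from one frame of such a pressure mat. The R sketch below assumes a 32 × 32 matrix of pressures in kPa, matching the sensor arrangement given above; treating the mat's 0.133 kPa resolution as the activation threshold is our assumption, not the manufacturer's specification.

```r
# Comfort metrics from one pressure-mat frame: 'p' is a 32 x 32 matrix of
# contact pressures in kPa (FSA Bed mat sensor arrangement).
sensor_area_cm2 <- 2.05 * 5.72   # one sensor: 20.5 x 57.2 mm, in cm^2
threshold_kpa   <- 0.133         # assumed activation threshold (mat accuracy)

contact_metrics <- function(p) {
  active <- p > threshold_kpa                 # sensors loaded by the body
  data.frame(mean_pressure_kpa = mean(p[active]),
             max_pressure_kpa  = max(p),
             contact_area_cm2  = sum(active) * sensor_area_cm2)
}

# Hypothetical frame, for illustration only:
set.seed(1)
p <- matrix(pmax(0, rnorm(32 * 32, mean = 1, sd = 2)), nrow = 32)
contact_metrics(p)
```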
First, it was decided to design a structure and then to numerically verify the stiffness of hybrid springs for seats with non-linear compression characteristics. It was assumed that the designed spring should be composed of two parallel linked elementary springs (Fig. 2). The compression characteristic of the whole system would then also be composed of two segments: one representing the deflection of the element with stiffness k1 up to the moment when the coils of the spring with stiffness k2 settle on the foundation, and the other segment illustrating the joint stiffness of both springs, k3 = k1 + k2 (the stiffnesses of parallel springs add). Traditional parallel systems of cylindrical or barrel springs are of little practical value from both the economic and the practical point of view: the weight of the box spring and the amount of labor required for its assembly increase. Thus, an effective solution needs to be sought by modeling the stiffness of one-piece hybrid conical springs. The progressive stiffness of the designed springs should provide high softness of the system at mild surface loads and considerable stiffness under the action of concentrated forces or forces of high intensity. For the described operating conditions, the minimum structural requirement is a spring composed of two outer cones with low stiffness and an inner spring (cylindrical or conical) of higher stiffness. The analytical solution for the stiffness of conical springs was given for the first time by Timoshenko (Timoshenko and Young, 1962). However, that model may be applied only to springs with a small pitch, where the coil angle and the distances between coils are slight. In the case of bonnell springs, there are large pitches, considerable coil angles and large distances between coils. The calculation method for such springs was presented in studies by Smardzewski (1993a,b,c, 2006, 2008a,b), where the stiffness of one spring cone is calculated from an equation in terms of: R1, the biggest coil radius; R2, the smallest coil radius; α, the coil angle of the spring; n, the number of spring coils; G, the Kirchhoff (shear) modulus; r, the spring wire radius; and m, a coefficient dependent on spring stiffness. The above-mentioned studies also demonstrated the consistency of the analytical solutions with the results of numerical calculations. Although the developed mathematical formulas adequately describe the stiffness of the modeled structures, they may be difficult to apply in the practice of design offices. Thus, it was decided to conduct virtual modeling of the spring shapes in the Autodesk® Inventor® Professional 2011 environment, with the use of computer-integrated CAD/CAE applications that are well known and commonly available to engineers. It was assumed that the dimensions of the outer spring cones would be close to the dimensions of the spring cones from Figure 1a, with wire diameters in the range 1.4 ≤ 2r ≤ 2.2 mm. On this basis, three models of hybrid springs to be used in a furniture seat were prepared, with the dimensions as in Figure 3.
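Because the closed-form equation itself could not be reproduced above, the sketch below falls back on the standard textbook rate of a constant-pitch conical compression spring before any coil bottoms out, k = G·d⁴ / [16·n·(R1 + R2)·(R1² + R2²)]. This simplification ignores the coil angle α and the coefficient m of the authors' formula, so the result is indicative only; the input values are assumed, not taken from the paper.

```r
# Textbook linear rate of a conical compression spring (no coil contact):
#   k = G * d^4 / (16 * n * (R1 + R2) * (R1^2 + R2^2))
# With G in MPa and lengths in mm, k comes out in N/mm.
conical_spring_rate <- function(G, d, n, R1, R2) {
  (G * d^4) / (16 * n * (R1 + R2) * (R1^2 + R2^2))
}

# Assumed inputs: shear modulus of spring steel ~8e4 MPa, wire diameter
# 1.6 mm, 2 active coils per cone, largest/smallest coil radii 30 / 15 mm.
conical_spring_rate(G = 8e4, d = 1.6, n = 2, R1 = 30, R2 = 15)  # ~0.32 N/mm
```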
The models are characterized by an identical shape of the outer biconical spring. The difference consists in the shape of the 4-coil inner spring. Model A has an inner spring with coils expanding downwards at a 3° angle. Model B contains a cylindrical spring with an identical diameter of the inner coils, while model C comprises a conical spring whose coils narrow downwards at an angle of 5°. A constant distance of 34.4 mm was maintained between the base of the biconical spring and the base of each of the inner springs. This distance results from the adopted assumption that, in the case of seat springs within this range of deflections, the stiffness of the spring is k1 ≤ 0.3 N/mm. This means that deflections amounting to approx. 1/3 of the pitch of the spring should be caused by loads of max. 10 N. Each of the designs (a total of 9) was recorded in an STP file, and as solids they were imported into the Autodesk® Algor® Professional 2011 system, which performs calculations using the finite element method. Next, appropriate meshes were generated, composed of 20-node solid elements with a mesh size of max. 0.5 mm (Fig. 4). The nodes of the coils constituting the base of the biconical spring were rigidly supported and placed on the Impact Plane surface. This surface made it possible to provide a comprehensive analysis of contact with all surfaces of the spring coils. Compression loads were exerted by a stiff plate. The next step in the design process was to develop a spring structure with a non-linear compression characteristic to be used in mattresses. Here the assumed spring stiffness was k1 ≤ 0.2 N/mm. This means that a deflection of approx. 1/2 of the pitch of a biconical spring should be caused by loads of max. 10 N. In view of the above recommendations, four successive spring designs were prepared, with geometry and dimensions as in Figure 5. Wire with a diameter of 1.4 and 1.6 mm was used in the models. Model D is characterized by an identical shape of the outer biconical spring as that in models A, B and C. The difference consisted in the shape of the inner spring: this spring was composed of three coils expanding downwards at an angle of 3°. In model E, the geometry of the outer biconical spring was changed: the coil angle was increased from 70° to 75°, while the coil angle of the inner spring changed from 3° to 6°. Moreover, the distance between the base of the biconical spring and the base of each of the inner springs was increased to 50 mm. This distance was to provide a greater free settlement of the coils in the biconical spring than in the previous models. The method of preparation for numerical calculation in models D and E was identical to that in models A, B and C. The wire material was spring steel with a modulus of elasticity E = 9·10⁴ MPa. The task was defined as nonlinear, taking into consideration considerable displacements, and realized 100 iterations within 1 second. As a result of the calculations, stiffness characteristics were obtained for the individual hybrid springs, and they were presented in the form of the dependence force = f(displacement).
Uniaxial compression of bonnell and barrel springs reveals a completely different character of work for each of these springs. Figure 6 presents the forms of deformation for springs compressed to 50% and 25% of their initial pitch. As can be seen in Figure 6a, the coils of bonnell springs do not settle onto the base within the range of deflections of approx. ¾ of the spring pitch. Only when this boundary was exceeded did the first active coil settle on the base, and only at one point Q. In the case of a barrel spring (Fig. 6b), the gradual settlement of coils occurs only at a deflection exceeding 75% of the spring pitch. Settlement of the coils at such a considerable compression of the springs means that their range of progression is practically shifted outside the range of design work. Figure 7 presents the mean stiffness characteristics of bonnell and barrel springs determined in the uniaxial compression test. It can be clearly seen from this figure that, within the range of displacements of up to 70 mm, a bonnell spring exhibits a linear dependence between force and deflection. Only when this value is exceeded does the spring stiffness increase markedly. This is caused by the above-mentioned settlement of the coil and its support on the base. In contrast, a barrel spring exhibits a much lower, but completely linear, stiffness characteristic. The linearity of the design-load section of the compression characteristics of both springs results in a situation where the same mattress, made from identical components and with identical dimensions, may be too hard for a child and too soft for an adult. This is indicated by the results of the measurements shown in Figure 8. It can be seen that the greatest contact stresses at the contact surface of the boy's body and the mattress concentrate at the head, the shoulder and the pelvis (Fig. 8a). At the site of support of the shoulder, the maximum value was 8.79 kPa, while at the site of support of the pelvis it was 9.86 kPa. At the body/mattress contact surface in the case of the man, the greatest stresses were recorded at the height of the shoulder and head, amounting to 12.13 kPa, while at the site of support of the pelvis the stresses amounted to 6.79 kPa. Guttmann (1946), Husain (1953) and Kosiak (1959, 1961) reported a relationship between the amount of pressure, the duration of its application and the development of tissue damage in canine and rat experiments. Kosiak (1959, 1961) stated that microscopic pathological changes were noted in tissues subjected to as little as 8 kPa for only one hour, although no changes were recorded in the animals that were subjected to pressures of 4.7 kPa for periods up to four hours. According to Hostensa et al. (2001), Landis (1930) and Takahashi et al. (2010), pressures lower than 2.7 to 4 kPa are required to prevent capillary occlusion and pressure discomfort due to prolonged sitting. Reswick and Rogers (1976) developed a relationship between the maximum pressure experienced by the supporting tissues and the time over which that maximum pressure was applied. If the pressure/time index fell above the curve, subjects exhibited pressure-sore histories; if it fell in the acceptable zone, they did not show pressure problems. This classic study has served as the basis for clinical management practices until this day. Loads of 4-8 kPa, as a criterion of comfort, were applied in many studies on the design of seats and mattresses (Brienza et al., 1996; Butcher and Thompson, 2009, 2010; Gross et al., 1994; Hamanami et al., 2004; Sacks, 1989; Seigler and Ahmadian, 2003; Smardzewski, 2009; Smardzewski et al., 2010a,b; Tewari and Prasad, 2000; Wang et al., 2004). On the basis of the presented literature data and those from Figure 8, it can be seen that the selected mattress was too hard for the user of small weight, who had a small body/mattress contact surface area. The man's body was also inadequately supported: the overly high disproportion of stresses in the shoulder and pelvis areas indicates that the mattress did not support his body uniformly.
When comparing the stiffness of springs modeled from wire with a diameter of 1.6 mm (Fig. 9), it can be seen that within the range of displacements from 0 to 34.4 mm only the biconical spring works. Its stiffness in this range of displacements overlaps with the stiffness of the barrel spring. From the moment the first coils of the inner spring contact the base, the stiffness of the whole system increases markedly. Here, the smallest slope of the curve was observed for model A, while it was the greatest for model C. This means that, above a load of 7 N, deflections of spring type A increase faster than those of spring type B, while deflections of spring type B increase faster than those of spring C. The two-rate nonlinear stiffness of the springs results in a situation where, at a load of 20 N, a traditional bonnell spring deflects by 37 mm, spring type A by 48 mm, spring B by 45 mm and spring C by 41 mm. An increase in load up to 40 N causes the bonnell spring to continue to deflect linearly to a value of 74 mm, while spring A deflects to 69 mm, spring B to 60 mm and spring C to 52 mm. In order to better describe the differences in stiffness of the springs, two stiffness coefficients were defined: the coefficient k1 = (F7N - F0N)/(λ7N - λ0N) (N/mm) for the first linear segment of the characteristic, with a maximum load of 7 N, and k3 = (F40N - F10N)/(λ40N - λ10N) (N/mm) for the second segment, illustrating the stiffness resulting jointly from the outer biconical spring and the inner spring, with a maximum load of 40 N. As can be seen from Figure 9, the stiffness of the barrel spring is constant, amounting to k1 = 0.11 N/mm, while it is k1 = 0.55 N/mm for the bonnell spring. Still, the stiffness of all models of hybrid springs in the first range is 0.19 < k1(A,B,C) < 0.20 N/mm. In the second range of deflections, the stiffness depends on the shape of the inner spring. For spring type A the stiffness index k3A is 0.99 N/mm, for spring B k3B = 1.36 N/mm, and for spring type C k3C = 1.87 N/mm. The stiffness of springs modeled from wire with a diameter of 1.8 mm and 2.2 mm is presented in Figures 10 and 11. In these cases, too, similar qualitative differences can be observed in the stiffness of the individual spring models. It is also of interest that their stiffness in the first range of displacements increased only slightly. For models made from wire with a diameter of 1.8 mm, this stiffness falls within the range 0.22 < k1(A,B,C) < 0.23 N/mm, while for models made from wire with a diameter of 2.2 mm it is 0.23 < k1(A,B,C) < 0.32 N/mm. Considerably greater quantitative differences are found in the second range of displacements, during the simultaneous compression of the coils of the biconical and inner springs. As can be seen in Figure 10, the stiffness indexes for the individual springs are as follows: k3A = 2.01, k3B = 3.04 and k3C = 4.11 N/mm. In turn, for springs made from wire with a diameter of 2.2 mm (Fig. 11), the stiffness indexes for the individual spring types are: k3A = 2.63, k3B = 4.57 and k3C = 6.26 N/mm.
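The two coefficients defined above can be computed from any recorded force-deflection curve by interpolating the deflections at the defining loads; in the R sketch below, the curve is an invented bilinear example, not measured data.

```r
# Two-rate stiffness coefficients as defined in the text:
#   k1 = (F_7N  - F_0N ) / (lambda_7N  - lambda_0N )   first (soft) range
#   k3 = (F_40N - F_10N) / (lambda_40N - lambda_10N)   second (stiff) range
stiffness_coefficients <- function(force, deflection) {
  defl_at <- function(f) approx(force, deflection, xout = f)$y
  c(k1 = (7 - 0)   / (defl_at(7)  - defl_at(0)),
    k3 = (40 - 10) / (defl_at(40) - defl_at(10)))
}

# Hypothetical bilinear curve: 0.2 N/mm up to 7 N, then 1.0 N/mm above it.
force      <- c(0, 7, 40)    # N
deflection <- c(0, 35, 68)   # mm
stiffness_coefficients(force, deflection)   # k1 = 0.2, k3 = 1.0 N/mm
```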
Taking into consideration the results of the numerical calculations, it is obvious that the new models of upholstery springs are, in terms of their performance characteristics, much more attractive than the bonnell or barrel (encased) springs used so far. A significant advantage of these springs is their two-rate stiffness, which guarantees a greater potential for freely modeling the stiffness of armchair seats depending on the needs of their users. Considering the users' anthropometric properties, particularly the weight and dimensions of the body, a seat optimally supporting the body in the sitting position can be designed. Among the nine analyzed variants of springs, model A made from wire with a diameter of 1.6, 1.8 or 2.2 mm turned out to be the most practical structural solution. The advisability of this selection is confirmed by Figure 12. It can be seen from this figure that, by applying different wire diameters, the stiffness of the springs in the initial range of coil settlement may easily be changed from 0.19 N/mm to 0.32 N/mm. In the second range of the experimental parameters, their stiffness increases from 0.99 N/mm to 2.63 N/mm. Within such a spectrum of spring stiffness, a structural solution may therefore be selected that provides comfort and functionality of seating and resting furniture for a child, a woman or a man. Due to their linear stiffness characteristics (Fig. 12), bonnell or barrel springs do not offer such extensive possibilities. Moreover, new designs were also developed for spring types D and E, to be used in mattresses. Their nonlinear stiffness characteristics are presented in Figure 13, which includes the characteristic of spring type D made from wire with a diameter of 1.6 mm and two characteristics of spring type E made from wire with a diameter of 1.4 and 1.6 mm. We can see from Fig. 13 that the settlement of the coils of the inner spring starts after the spring is compressed by approx. 50 mm. Depending on the spring model and wire diameter, the value of the force causing this settlement varies: for spring type D this force was 10 N, while for spring type E it was 7 N or 8 N, depending on the wire diameter (1.4 mm vs. 1.6 mm). In the first range of the springs' operation, their stiffness was k1D = 0.19 N/mm and 0.14 < k1E < 0.15 N/mm. This is obviously markedly lower than the stiffness of barrel springs, which is particularly significant in the modeling of mattresses to be used by children and women. In the second stage of coil settlement, the stiffness resulting from the individual springs increased markedly. For spring type D it was 1.85 N/mm, while for spring type E made from wire with a diameter of 1.6 mm it was k3E = 1.28 N/mm; in the case of spring type E made from wire with a diameter of 1.4 mm, k3E = 1.08 N/mm. Moreover, the latter spring has an additional advantage: as can be seen from Figure 13, before complete compression it exhibits a three-rate stiffness characteristic, and in the third range of work its stiffness is 4.18 N/mm.
The presented models of hybrid springs for mattresses differ from the hybrid springs for seats in the following characteristics: they exhibit a greater range of free coil settlement of the outer springs, a lower stiffness in the first and second range of work during uniaxial compression and, in some cases, three-stage stiffness characteristics. These advantages perfectly match the modeling of ergonomic mattresses adapted to the individual needs of users. This is crucial particularly when the bed is designed for use in boarding houses, students' hostels, hotels, hospitals, etc., where many individuals of different physique use the same mattress.

CONCLUSIONS

On the basis of the conducted tests and the analyses of their results, the following conclusions and observations may be made:
- Traditional bonnell or barrel springs exhibit linear stiffness over a considerable range of design displacements, of up to 70 mm.
- The tested mattresses made from bonnell springs cause, in contact with the user's body, disadvantageous pressures exceeding 8 kPa.
- The developed structures of hybrid springs are characterized by an advantageous nonlinear stiffness.
- In the design of seats, we may recommend spring model A made from wire with a diameter from 1.6 mm to 2.2 mm, while for mattresses we recommend spring type E made from wire with a diameter from 1.4 to 1.6 mm.

[Figure 4: Mesh model of the spring. Figure 8: Distribution of contact stresses on the contact surface of the user's body with the mattress: a) a child, b) a man. Figure 13: Stiffness of hybrid upholstery springs type D and E.]
MACC1 Regulates LGR5 to Promote Cancer Stem Cell Properties in Colorectal Cancer

Simple Summary

We discovered a novel association between cancer and stemness. In particular, we demonstrate a MACC1-LGR5 link through the transcriptional regulation of the crucial stemness gene LGR5 by MACC1, the inducer of tumor initiation, progression and metastasis. We show this regulation by using 2D and 3D cell culture models, in CRC-derived PDX mouse models and in human CRC patient samples. This study indicates that the metastasis inducer MACC1 acts not only as a cancer stem cell-associated marker, but also as a regulator of LGR5 expression and LGR5-mediated stem cell properties. Thus, interventional approaches targeting MACC1 would potentially improve further targeted therapies for CRC patients to eradicate CSCs and prevent cancer recurrence and distant metastasis formation.

Abstract

Colorectal cancer (CRC) is one of the leading causes of cancer-related deaths worldwide. The high mortality is directly associated with metastatic disease, which is thought to be initiated by colon cancer stem cells, according to the cancer stem cell (CSC) model. Consequently, early identification of those patients who are at high risk for metastasis is crucial for improved treatment and patient outcomes. Metastasis-associated in colon cancer 1 (MACC1) is a novel prognostic biomarker for tumor progression and metastasis formation independent of tumor stage. We previously showed an involvement of MACC1 in cancer stemness in the mouse intestine of our MACC1 transgenic mouse models. However, the expression of MACC1 in human CSCs and its possible implications remain elusive. Here, we explored the molecular mechanisms by which MACC1 regulates stemness and the CSC-associated invasive phenotype based on patient-derived tumor organoids (PDOs), patient-derived xenografts (PDXs) and human CRC cell lines. We showed that CD44-enriched CSCs from PDO models express significantly higher levels of MACC1 and LGR5 and display higher tumorigenicity in immunocompromised mice. Similarly, RNA sequencing performed on PDO and PDX models demonstrated significantly increased MACC1 expression in ALDH1(+) CSCs, highlighting its involvement in cancer stemness. We further showed the correlation of MACC1 with the CSC markers CD44, NANOG and LGR5 in PDO models as well as established cell lines. Additionally, MACC1 increased stem cell gene expression, clonogenicity and sphere formation. Strikingly, we showed that MACC1 binds as a transcription factor to the LGR5 gene promoter, uncovering the long-known CSC marker LGR5 as a novel essential signaling mediator employed by MACC1 to induce CSC-like properties in human CRC patients. Our in vitro findings were further substantiated by a significant positive correlation of MACC1 with LGR5 in CRC cell lines as well as CRC patient tumors. Taken together, this study indicates that the metastasis inducer MACC1 acts as a cancer stem cell-associated marker. Interventional approaches targeting MACC1 would potentially improve further targeted therapies for colorectal cancer patients to eradicate CSCs and prevent cancer recurrence and distant metastasis formation.

Introduction

Colorectal cancer (CRC) has a very high prevalence throughout the world. It is rated as the third most common cancer in males and the second in females. The high mortality of CRC is directly associated with metastatic disease. Therefore, it is necessary to discover molecular biomarkers for the early diagnosis of tumors with high metastatic potential [1].
The metastasis-associated in colon cancer 1 (MACC1) gene was first identified by a genome-wide analysis of genes that are differentially expressed in human colon cancer tissues, metastases and normal tissues [2]. It is located on human chromosome 7 (7p21.1) and encodes a protein of 852 amino acids. In contrast to normal colon mucosa, MACC1 displays highly elevated expression levels in primary and metastasized colon cancer tissues. It shows the highest expression in the tumors or blood of patients who have not yet distantly metastasized but who will metachronously metastasize, or of patients who have already developed distant metastases. Additionally, the 5-year survival rate of patients with high MACC1 expression in their primary tumors is decreased from 80% to 15% [2,3]. MACC1 has also been shown to induce crucial metastasis-associated phenotypes, such as migration, invasion, cell dissemination, wound healing and proliferation in cell cultures. In xenografted, patient-derived and transgenic mouse models, MACC1 induces tumor initiation and progression, as well as liver and lung metastases [2,4]. We generated MACC1 transgenic mice and crossed them with adenoma-initiating APC min animals. When we observed the transition to malignant carcinomas in vil-MACC1/APC min mice, we found, by transcriptomics of vil-MACC1/APC min vs. APC min mice, a tremendous induction of stemness genes by MACC1, such as Nanog (direct induction) and Oct4 (indirect induction) [4]. The clinical importance of MACC1 for metastasis prognostication, prediction and treatment planning has meanwhile been confirmed for more than 20 solid tumor types; certainly, this has been repeatedly shown for CRC [3]. Accumulating research in the last decades underlines the long-standing hypothesis that human cancers can be considered stem cell diseases, supporting the view that solid tumorigenesis, progression and chemoresistance/radiochemoresistance are initiated by a small population of cancer cells. According to the cancer stem cell (CSC) model, the features of these small fractions of cancer cells are self-renewal and pluripotency. Tissues such as the intestinal epithelium continuously self-renew through the replication of this particular set of adult stem cells [5]. CSCs are also thought to be responsible for recurrence and metastasis [6]. Intestinal crypt base stem cells, from which CSCs derive, are located at the bottom of the crypt structures, and several molecules are used for their identification, including CD44, CD166, NANOG, Oct4, ALDH1 and LGR5 [6-8]. Leucine-rich repeat-containing G-protein-coupled receptor 5 (LGR5) was first identified by Hsu et al. in 1998 [9]. The human LGR5 gene is located on chromosome 12 at position 12q22-q23 and encodes a protein of 907 amino acids with seven transmembrane domains (like its rat and mouse homologs). The LGR5 protein is expressed in crypt-base populations that are able to develop into all differentiated lineages within the intestine [10]. LGR5 is a Wnt target gene and marks dividing intestinal stem cells. LGR5 is also reported as a biomarker for the identification of stem cells in the small intestine and colon [11-13]. However, MACC1 expression in human colon CSCs and its possible implications for stemness still remain elusive. Here, we report for the first time that the metastasis inducer MACC1 transcriptionally regulates LGR5 expression to promote cancer stem cell properties in CRC, by using 2D and 3D cell culture models, CRC-PDX mouse models as well as human CRC patient samples.
Cell Culture and Organoids

The human colorectal cancer cell lines SW480 (ATCC #CCL-228), HCT116 (ATCC #CCL-247) and SW620 (ATCC #CCL-227), with low, moderate and high endogenous MACC1 expression, respectively, were cultured in DMEM medium supplemented with 10% FCS. The cells were transduced for forced MACC1 expression or MACC1 knockdown, and MACC1 knockout cells were generated as described in [14].

PDX Generation and the Tumor Dissociation Procedure

PDX tumor models were generated as described by Schütte et al. [15], and the use of patient material for PDX was approved by the local Institutional Review Board of Charité-Universitätsmedizin (Charité Ethics; EA 1/069/11) and the ethics committee of the Medical University of Graz (Ethics Commission of the Medical University of Graz, Auenbruggerplatz 2, 8036 Graz, Austria) and was confirmed by the ethics committee of the St John of God Hospital Graz (23-015 ex 10/11). For all in vivo experiments, the welfare of the animals was maintained in accordance with the general principles governing the use of animals in experiments of the European Communities and with German legislation. The study was performed in accordance with the United Kingdom Coordinating Committee on Cancer Research (UKCCCR) regulations for the Welfare of Animals and with the German Animal Protection Law, and was approved by the local responsible authorities, Berlin, Germany (issued to the EPO GmbH Berlin by the State Office of Health and Social Affairs, Berlin, Germany; approval no. G 0333/18). For PDX dissociation, the PDX samples were freshly obtained from mice, washed in PBS and mechanically dissected into small pieces, which were added to a freshly prepared enzyme mix containing 2 mg/mL collagenase I, 2 mg/mL collagenase II, 640 µg/mL dispase and 500 units DNase I in 5 mL of RPMI without FCS. The enzyme mix containing the dissected tumor pieces was transferred to a gentleMACS C-tube and processed with the gentleMACS program m_impTumor_03.01. Then, the C-tubes were placed on the gentleMACS rotator in a 37 °C incubator for 30 min. After tumor digestion, the mixes were filtered twice through cell strainers, first with a 100 µm strainer and then with a 70 µm strainer. The filtrates containing single cells were centrifuged at 400× g for 5 min at room temperature to remove the enzymes. A quantity of 1 mL of erythrocyte lysis buffer was added to the cell pellet for 10 min at room temperature in the dark to lyse the mouse erythrocytes. After incubation, 5 mL PBS was added, and the suspension was centrifuged at 550× g for 5 min at room temperature to remove the lysis buffer. The pellet was washed with PBS, and the cell number was counted for the subsequent experiments.

CD44-APC Tagging and Fluorescence-Activated Cell Sorting (FACS) Analysis

CD44-APC Mouse Anti-Human CD44 antibody (Clone G44-26, BD Pharmingen™, Heidelberg, Germany) and APC Mouse IgG2b κ Isotype Control (Clone 27-35, BD Pharmingen™) were added at a final dilution of 1:10 to PBS-washed cells. The concentration of cells was adjusted to a range from 1 × 10⁶ to 1 × 10⁷ cells per 100 µL PBS before adding the antibodies. The suspension was incubated for 45 min at 4 °C in the dark. After incubation, the cells were washed twice with PBS by repeated centrifugation at 300× g for 8 min and addition of PBS. The cells were suspended in 500 µL PBS for FACS analysis.
Cells tagged with the CD44-APC antibody were FACS-sorted. The cell suspensions were first filtered to obtain single cells for FACS. Each FACS analysis was performed by gating for single cells, then for living cells via DAPI labeling. The cells were then sterile-filtered through a 20 µm sieve for the collection of the unsorted, CD44-low and CD44-high populations directly into PBS for the subsequent experiments.

Aldefluor Assay

Organoids and xenografts were processed to single cells and labeled using the Aldefluor assay according to the manufacturer's (Stemcell Technologies, Cologne, Germany) instructions. The cells were sorted by FACS (BD FACS Aria II) for ALDH activity and DAPI to exclude dead cells.

RNA Sequencing

The cells were lysed in RLT buffer and processed for RNA using the RNeasy Mini Plus RNA extraction kit (Qiagen, Hilden, Germany). The samples were processed using Illumina's TruSeq RNA protocol and sequenced on an Illumina HiSeq 2500 machine as 2 × 125 nt paired-end reads. The raw data in FASTQ format were checked for sample quality using our internal NGS QC pipeline. Reads were mapped to the human reference genome (assembly hg19) using the STAR aligner (version 2.4.2a) [16]. Total read counts per gene were computed using the program "featureCounts" (version 1.4.6-p2) in the "subread" package, with the gene annotation taken from Gencode (version 19). The "DESeq2" Bioconductor package [17] was used for the differential expression analysis [18,19]. Array data are available in the ArrayExpress database (www.ebi.ac.uk/arrayexpress, accessed on 21 January 2024) under accession numbers ArrayExpress: E-MTAB-5209 and ArrayExpress: E-MTAB-8927.

In Vivo Tumorigenicity Assay

FACS-sorted PDO-derived cell populations of 1 × 10³ cells/animal were transplanted in matrigel subcutaneously (s.c.) into the left flank of 6-8-week-old anesthetized female NMRI nu/nu mice (n = 3 mice/sorted population). The mice were observed for a maximum of 105 days and maintained under sterile and controlled conditions (22 °C, 50% relative humidity). Tumor growth was measured in two dimensions with a caliper. Tumor volumes (TV in cm³) were determined by the formula TV = (width² × length) × 0.5. The work conducted with living animals (at EPO GmbH Berlin, Germany) was approved by the local authorities (Landesamt für Gesundheit und Soziales, LaGeSo Berlin, Germany) under approval number H0023-09.

Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)

The cell culture RNA purification protocol from the GeneMATRIX Universal RNA Purification Kit (Roboklon, Berlin, Germany) was followed, and the concentration of RNA was measured using the NanoDrop™ 2000/2000c spectrophotometer (Thermo Fisher, Darmstadt, Germany). For each sample, 50 ng of total RNA was reverse transcribed. Reverse transcription was performed with random hexamers in 5 mM MgCl₂, 1× RT buffer, 250 µM pooled dNTPs, 1 U/µL RNase inhibitor and 2.5 U/µL MuLV reverse transcriptase. The reaction was run at 42 °C for 15 min and 99 °C for 5 min, with subsequent cooling at 5 °C for 5 min.
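A minimal sketch of the caliper-based tumor-volume formula used in the in vivo tumorigenicity assay above; the measurements are invented for illustration.

```r
# Tumor volume from two-dimensional caliper measurements, as in the Methods:
#   TV (cm^3) = (width^2 * length) * 0.5
tumor_volume <- function(width_cm, length_cm) (width_cm^2 * length_cm) * 0.5

# Hypothetical weekly measurements of one xenograft:
width_cm  <- c(0.3, 0.5, 0.7)
length_cm <- c(0.4, 0.7, 1.0)
tumor_volume(width_cm, length_cm)   # growth curve in cm^3
```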
Tumor Sphere Formation Assay

Cells were counted and diluted in the corresponding medium to a concentration of 0.5 cells/µL (1 × 10^2 cells in 200 µL). Edge wells in ultra-low-attachment 6-well plates were filled with PBS to minimize medium loss via evaporation. A quantity of 200 µL of cell suspension was added to each well and the plates were sealed with Parafilm to avoid evaporation of the medium. The plates were incubated at 37 °C for 7 days. The number of spheres formed was counted at 40× magnification. The results were expressed as the percentage of tumor cell spheres counted relative to the initial number of cells seeded. While counting, only the solid circular spheres were counted, excluding aggregated cells.

Clonogenic Formation Assay

A total of 5 × 10^2 cells were plated per well in 6-well plates in the corresponding medium and placed in the incubator at 37 °C for 7 days (except SW480 cells, which required 14 days for proper colony formation and analysis). After the respective incubation times, the media were aspirated, and 800 µL of crystal violet/formaldehyde mix was added to each well and incubated for 40 min at room temperature. After removal of the crystal violet/formaldehyde mix, the plates were washed in water 3 times by submerging the plates for complete removal of residual dye. The plates were placed upside-down overnight for complete evaporation of the water and to prevent any watermarks on the wells, which could have interfered with the subsequent steps. Pictures were taken with the FluorChem Q system (Biotechne, Minneapolis, MN, USA) and analyzed using the Colony Assay program in ImageJ (NIH, Bethesda, MD, USA). The exposure threshold was always adjusted before the complete analysis to ensure elimination of background signals.

Chromatin Immunoprecipitation (ChIP)

ChIP was performed using the Magna ChIP™ HiSens Kit (Merck, Darmstadt, Germany) according to the manufacturer's instructions. Cell lysates were sonicated for 24 pulses at 100% output, and immunoprecipitation of the DNA-protein complexes was performed using Magna ChIP A/G beads (Merck, Darmstadt, Germany) overnight at 4 °C, followed by DNA isolation. Mouse IgG and anti-RNA polymerase II antibodies (both obtained from Merck Millipore, Burlington, MA, USA) were used as negative and positive controls, respectively. The binding of MACC1 to the LGR5 gene promoter (LGR5 gene promoter forward primer: 5′-TCACTTCGACTTCCTCACCC-3′ and reverse primer: 5′-CACTGTCTGGCTCGCTTTTG-3′) was evaluated via specific qRT-PCR.
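Both assays above reduce to the same readout: structures counted divided by cells seeded, expressed as a percentage. A short sketch of that bookkeeping, using the seeding numbers stated in the text (1 × 10^2 cells per well for spheres, 5 × 10^2 for colonies); the counts are hypothetical.

```python
def formation_efficiency(structures_counted, cells_seeded):
    """Percentage of seeded cells that formed a sphere or colony."""
    return 100.0 * structures_counted / cells_seeded

print(formation_efficiency(12, 100))  # tumor sphere assay, 1e2 cells seeded
print(formation_efficiency(85, 500))  # clonogenic assay,   5e2 cells seeded
```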
Patient Sample Analysis

We analyzed tissue specimens from 59 patients suffering from CRC (with informed written consent and approval by the Charité Ethics Committee, Charité University Medicine, Berlin, Germany), which were used in our previous study [4]. None of the patients had a history of familial colon cancer or suffered from a second tumor of the same or a different entity. All patients were staged I, II or III (not distantly metastasized at the time point of surgery). They were previously untreated, and the patients' tumors were surgically R0-resected (complete resection with no microscopic residual tumor). Ethical approval and patient consent to participate: all analyses were carried out in accordance with the guidelines approved by the institutional review board, number AA3/03/45, of the Charité-Universitätsmedizin Berlin, Germany. All patients gave written informed consent and the authors complied with all relevant ethical regulations for research involving human participants.

Statistical Analysis

All calculations and statistical analyses were performed with GraphPad Prism version 5.01. Comparisons of two groups were performed by two-tailed, paired Student's t-tests. More than two groups were compared using ANOVA and appropriate post-tests. Correlations between MACC1 and LGR5 in cell lines and patient samples were evaluated by using the Spearman-rho test. All tests were two-sided, and p values ≤ 0.05 were considered to be statistically significant (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001). Error bars represent the standard deviations. Whisker boxes show the means, minimums, maximums, and 1st and 3rd quartiles.

We then knocked down MACC1 in these PDO cells and observed a subsequent LGR5 downregulation, whereas CD44 was not affected. Further, we investigated the effect of MACC1 knockdown on phenotypical stemness features and observed a clearly reduced tumor sphere formation (Figure 2).

Expressions of MACC1 and LGR5 Are Elevated in ALDH1-Positive Stem Cell-Enriched Populations of Patient-Derived 3D Organoids

In order to prove the link between MACC1 and stemness properties, five PDOs were sorted into ALDH1 (aldehyde dehydrogenase 1; stem cell marker and therapeutic target [22,23])-negative and -positive subpopulations. Interestingly, MACC1 as well as LGR5 mRNA levels were increased in ALDH1-positive cells (vs. ALDH1-negative cells; Figure 4a). Further, we also generated patient-derived xenografted mice (PDXs) from these PDOs. When sorting for ALDH1-negative and -positive subpopulations, we observed an increased expression of MACC1 in four out of five models in ALDH1-positive cells (Figure 4b). Taken together, RNA sequencing performed on PDO and PDX models demonstrated increased MACC1 expression in ALDH1-positive CSCs, highlighting its involvement in cancer stemness.
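The Statistical Analysis workflow above (paired two-tailed t-tests for two groups, Spearman-rho for MACC1-LGR5 correlations, stars assigned by p-value threshold) can be mirrored with scipy, as sketched below. The data arrays are placeholders, and GraphPad may handle ties and post-tests slightly differently.

```python
import numpy as np
from scipy import stats

def stars(p):
    """Significance labels as used in the figures of this study."""
    for cutoff, label in [(0.0001, "****"), (0.001, "***"),
                          (0.01, "**"), (0.05, "*")]:
        if p < cutoff:
            return label
    return "ns"

ctrl  = np.array([1.0, 1.2, 0.9, 1.1])   # placeholder expression values
treat = np.array([0.4, 0.5, 0.3, 0.6])
t, p = stats.ttest_rel(ctrl, treat)      # two-tailed, paired Student's t-test
print(f"t-test: p = {p:.4f} {stars(p)}")

macc1 = np.array([0.2, 1.5, 0.8, 3.1, 2.2])
lgr5  = np.array([0.1, 1.1, 0.9, 2.8, 2.0])
rho, p_rho = stats.spearmanr(macc1, lgr5)  # Spearman-rho correlation
print(f"Spearman: r = {rho:.2f}, p = {p_rho:.4f}")
```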
The MACC1-Stemness Marker Link in CRC 2D Cell Lines

Next, we tested the link between MACC1 and stemness in the 2D CRC cell lines SW480 (Figure 5a), HCT116 (Figure 5c,d) and SW620 (Figure 5b) with low, moderate and high endogenous MACC1 expression. Forced expression of MACC1 in endogenously low-MACC1-expressing SW480 cells led to increased expression of NANOG and LGR5. Knockout of MACC1 in the endogenously high-MACC1 SW620 cells reduced NANOG and LGR5. Additionally, the reduction in LGR5 expression upon MACC1 knockout was rescued by overexpressing MACC1 in these knockout clones. This validated the previous findings, where we showed MACC1-dependent expression regulation of stem cell factors. In the endogenously moderately expressing HCT116 cells, forced expression of MACC1 resulted in increased NANOG and LGR5 expression, whereas MACC1 knockout diminished NANOG and LGR5 levels. Thus, we confirmed the MACC1-stemness link also in 2D CRC cell lines (Figure 5a-d).

Back Translation of the MACC1-LGR5 Association in CRC-PDX Models

Four tumors from CRC-PDXs were dissociated, and CD44-low and CD44-high cell populations were sorted. Thereafter, we determined the specific gene expression levels of MACC1 and LGR5 in unsorted, CD44-low and CD44-high PDX-derived cells. Interestingly, we observed significantly higher MACC1 and LGR5 expression levels in CD44-high cells, confirming the discovered MACC1-LGR5 association also in passaged CRC-PDX-derived cells (Figure 7).

MACC1 Increases Cancer Stemness via Transcriptional Activation of the LGR5 Target Gene

We then explored whether MACC1 directly regulates the expression of LGR5 via the LGR5 promoter (Figure 8a). Using ChIP, we found a clear binding of MACC1 to the LGR5 promoter in SW620 cells, making LGR5 a direct target gene of MACC1 (Figure 8b). This finding substantiates the previous data, which showed MACC1-dependent regulation of stemness and CSC-like phenotypes in both PDO and PDX models, as well as in 2D CRC cell-line models.
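ChIP binding such as that shown for the LGR5 promoter is commonly quantified from qRT-PCR Ct values as percent input or as enrichment over the IgG control. The paper does not spell out its normalization, so the sketch below uses the widespread percent-input formula under our own assumptions (a 1% input aliquot and ideal amplification efficiency of 2); the Ct values are invented.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent-input quantification of a ChIP-qPCR amplicon."""
    # Adjust the input Ct for the chromatin fraction it represents ...
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    # ... then convert the Ct difference to the linear scale.
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

print(percent_input(ct_ip=27.5, ct_input=24.0))  # e.g. MACC1 ChIP
print(percent_input(ct_ip=31.0, ct_input=24.0))  # e.g. IgG control
```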
Correlation of MACC1 with LGR5 Expression in Different CRC Patient Cohorts

Finally, we proved the correlation of MACC1 with LGR5 expression in different cohorts of CRC patients (Figure 9). We started with a first cohort of 59 CRC patients, for which MACC1 expression is already published [2] and for which we now measured LGR5 expression in all 59 patients. We found a significant MACC1-LGR5 correlation (p < 0.0001) with r = 0.5169 (Figure 9a). In additional, already published CRC cohorts [24-26], we also found significant MACC1-LGR5 expression correlations (p = 0.0259, p = 0.0473 and p < 0.0001) with Spearman r = 0.3031, r = 0.5659 and r = 0.7273, respectively (Figure 9b). Strikingly, MACC1 and LGR5 expression showed moderate to strong correlation in CRC patients from different cohorts. Additionally, patients with high MACC1 mRNA expression had a significantly higher expression of LGR5 mRNA, as determined by qRT-PCR.

Discussion

Here, we report for the first time the link of MACC1, which is known to induce tumor initiation, progression and metastasis, with stemness via transcriptional activation of LGR5, a central stemness gene. We proved this link by using 2D and 3D cultures of human CRC cell lines, CRC-PDX mouse models and human CRC patient samples. First, we found higher expressions of MACC1 and stemness genes like LGR5 in CD44-high and ALDH1-positive stem cell populations. Interestingly, we verified an association of MACC1 and LGR5 by direct binding of MACC1 as a transcription factor to the LGR5 gene promoter. We supported this link by forced expression, knockdown or knockout of MACC1, followed by subsequent corresponding expression of stemness genes such as LGR5, as well as phenotypic features like the initiation of tumor sphere formation and clonogenicity. This newly discovered context was confirmed by MACC1-LGR5 expressions in four different CRC patient cohorts, newly described or publicly available. Taken together, MACC1 promotes cancer stem cell-like properties in CRC via employment of LGR5 as a novel signaling mediator.
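Returning to the patient analysis above: Figure 9a rests on a median split of MACC1 expression followed by a two-group comparison of LGR5, plus cohort-level Spearman correlations. A sketch with synthetic values follows (the published cohort data are not reproduced here); Mann-Whitney is our choice of nonparametric two-group test, since the paper does not state which test produced the ** in Figure 9a.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
macc1 = rng.lognormal(size=59)                    # synthetic 59-patient cohort
lgr5 = macc1 * rng.lognormal(sigma=0.5, size=59)  # correlated by construction

high = macc1 > np.median(macc1)                   # median expression as cutoff
u, p = stats.mannwhitneyu(lgr5[high], lgr5[~high], alternative="two-sided")
rho, p_rho = stats.spearmanr(macc1, lgr5)
print(f"LGR5 high vs. low MACC1: p = {p:.3g}; Spearman r = {rho:.2f}")
```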
In general, MACC1 has already been described in the stemness context in different cancer entities. In CRC, this association was analyzed in several studies [27-30], demonstrating the involvement of FoxA3, miR-3163 and DCLK1, which is able to phosphorylate MACC1. Targeting MACC1 in CRC not only affects known MACC1-induced features such as migration and invasion, but also influences stemness-associated phenotypes. In cervical cancer, MACC1 regulates the AKT/STAT3 signaling pathway to induce migration and invasion, but also cancer stemness [31]. In lung cancer, the long noncoding RNA MACC1-AS1 was described to promote stemness through promoting UPF1-mediated destabilization of LATS1/2 [32]. The same long noncoding RNA MACC1-AS1 was also described to promote stemness by antagonizing miR-145 in hepatocellular carcinoma cells [33] and by suppressing miR-145-mediated inhibition of the SMAD2/MACC1-AS1 axis in nasopharyngeal carcinoma [34], and it promotes stemness and chemoresistance through fatty acid oxidation in gastric cancer [35].

Merlos-Suarez et al. reported an intestinal stem cell signature that identifies CSCs by LGR5 and predicts disease relapse. Interestingly, they noted the tumor-promoting and metastasis-inducing MACC1 to be upregulated in LGR5-positive stem cells of the mouse intestine (supplement in [13]). In this study, we enriched stem cell populations by CD44 and also by ALDH1. We not only showed the association of MACC1 and LGR5 expression, but also demonstrated the molecular mechanism: binding of MACC1 to the gene promoter of LGR5 initiates its expression and induces stem cell properties like tumor sphere formation.

Since MACC1 is decisive for metastasis induction, it is crucial to delineate the importance of the stemness gene LGR5 in cancer metastasis. The causal link of LGR5 and cancer metastasis has been demonstrated by de Sousa e Melo et al. [36]. The authors developed an orthotopic mouse model and injected organoids directly into the colon mucosa, leading to tumors that disseminated primarily to the liver. FACS analysis revealed enrichment of LGR5-GFP+ cells in micro-metastases (5 weeks after injection) compared to both primary tumors and macro-metastases (6 weeks after injection), suggesting that dissemination and/or colonization of distant sites may originate from LGR5+ CSCs. Further, the requirement of LGR5+ CSCs was shown not only for the development of liver metastasis, but also for the maintenance of established liver metastasis.

Indeed, there are several biological features supporting the MACC1-LGR5 link, i.e., the involvement of MACC1 and of LGR5 in clathrin-mediated endocytosis [37,38], their localization at the tumor invasion front [39,40], their role in core clock regulation [41,42] and their impact on craniofacial development [43,44]. Further, the regulation of both genes shares defined signaling pathways, such as the Wnt signaling pathway [45,46]. Importantly, both genes are of prognostic value for CRC [3,47], and, in addition, several other solid tumor entities are reported with crucial roles of both genes for patient prognosis, such as gastric, breast, ovarian, pancreas, intrahepatic cholangiocarcinoma, neuroblastoma and nasopharyngeal cancer.
Conclusions

Taken together, this newly discovered MACC1-LGR5 association contributes to a better understanding of the stemness-tumor progression/metastasis link. This study indicates that the metastasis inducer MACC1 acts not only as a cancer stem cell-associated marker, but also as a regulator of LGR5 expression and LGR5-mediated stem cell properties. Since CSCs are believed to be responsible for tumor metastasis and relapse, interventional combinatorial approaches targeting MACC1 [48,49] and LGR5 [50,51] will potentially improve further targeted therapies for CRC patients to eradicate CSCs and prevent cancer recurrence and distant metastasis formation.

Funding: This research was supported by the German Cancer Consortium (DKTK), the Oncotrack Consortium, the Berlin School of Integrative Oncology (BSIO) and the Berlin Cancer Society (BKG).

Institutional Review Board Statement: PDX tumor models were generated as described by Schütte et al. [15], and the use of patient material for PDX was approved by the local Institutional Review Board of Charité-Universitätsmedizin (Charité Ethics; EA 1/069/11) and the ethics committee of the Medical University of Graz (Ethics Commission of the Medical University of Graz, Auenbruggerplatz 2, 8036 Graz, Austria) and was confirmed by the ethics committee of the St John of God Hospital Graz (23-015 ex 10/11). We analyzed tissue specimens from 59 patients suffering from CRC (with informed written consent and approval by the Charité Ethics Committee, Charité University Medicine, Berlin, Germany), which were used in our previous study [4]. None of the patients had a history of familial colon cancer or suffered from a second tumor of the same or a different entity. All patients were staged I, II or III (not distantly metastasized at the time point of surgery). They were previously untreated, and the patients' tumors were surgically R0-resected (complete resection with no microscopic residual tumor). All analyses were carried out in accordance with the guidelines approved by the institutional review board, number AA3/03/45, of the Charité-Universitätsmedizin Berlin, Germany. All patients gave written informed consent and the authors complied with all relevant ethical regulations for research involving human participants.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Figure 2. Knockdown of MACC1 in PDOs resulted in reduced LGR5 levels and a cancer stemness phenotype. (a) Effect of MACC1 knockdown in a total of four patient samples. (b) Impact of MACC1 knockdown on the tumor sphere formation ability of a total of four PDO models. (* = p < 0.05, ** = p < 0.01).

Figure 8. MACC1 increases cancer stemness via transcriptional activation of the LGR5 target gene. (a) Schematic representation of the full-length human LGR5 gene promoter region and the binding region for MACC1 detected by the respective primers. (b) ChIP analysis of direct binding of MACC1 to the human LGR5 promoter region in SW620 cells. (** = p < 0.01).

Figure 9. MACC1 expression correlates with LGR5 in CRC patients. Correlation analyses of MACC1 and LGR5 from CRC patient tumors. (a) MACC1 and LGR5 mRNA levels were determined by qRT-PCR and calculated as folds of the calibrator. Shown here is the LGR5 expression analysis of MACC1-low and MACC1-high patients. Classification of patients as low and high MACC1 expressers was performed using MACC1 median expression as a cutoff. (b) Datasets from other research groups were obtained from microarray analyses followed by normalization and filtering of the raw data using different algorithms. Correlation analysis was performed with nonparametric Spearman correlation [24-26]. ** p < 0.01.
Abstract kinetic equations with positive collision operators

We consider "forward-backward" parabolic equations in the abstract form $J d\psi/dx + L\psi = 0$, $0 < x < \tau \leq \infty$, where $J$ and $L$ are operators in a Hilbert space $H$ such that $J = J^* = J^{-1}$, $L = L^* \geq 0$, and $\ker L = 0$. The following theorem is proved: if the operator $B = JL$ is similar to a self-adjoint operator, then the associated half-range boundary problems have unique solutions. We apply this theorem to corresponding nonhomogeneous equations, to the time-independent Fokker-Plank equation $\mu \frac{\partial \psi}{\partial x}(x,\mu) = b(\mu) \frac{\partial^2 \psi}{\partial \mu^2}(x,\mu)$, $0 < x < \tau$, $\mu \in \mathbb{R}$, as well as to other parabolic equations of the "forward-backward" type. The abstract kinetic equation $T d\psi/dx = -A\psi(x) + f(x)$, where $T = T^*$ is injective and $A$ satisfies a certain positivity assumption, is also considered.

In the case τ < ∞, problem (1.7), (1.2) was studied in [53] with α = 0. In the case τ = ∞ (the half-space problem), one has a boundary condition of the type (1.2) at x = 0 and, in addition, a growth condition on ψ(x,µ) for large x. The half-space problem for Eq. (1.7) was considered in [43,44,17] in connection with stationary equations of Brownian motion. Note that the methods of [43,44,17,53] use the special form of the weight w and corresponding integral transforms. The results achieved in [44] for the sample case α = 1 were used in [45], where a wider class of problems was considered under the hypotheses that the weight w is bounded (1.8) and w(µ) = µ + o(µ) as µ → 0. However, all these results were obtained under additional assumptions on the boundary data. In particular, it was supposed that φ± are continuous.

The case when L may be unbounded and may have a continuous spectrum was considered in [16, Section 4], where the half-space problem is studied (in an abstract setting) under the assumptions (1.8) and L > δ > 0. This assumption was relaxed to L > 0 in [7]. However, it is difficult to apply the results of [7] to equation (1.7), since an additional assumption on the boundary values φ± appears; this assumption is close to the assumptions of [43,44,45]. The method of [16,7] is based on the spectral theory of self-adjoint operators in Krein spaces.

The aim of this paper is to modify the Krein space approach of [16,7] and to prove that problem (1.4)-(1.5) has a unique solution for arbitrary φ± ∈ H±. In particular, it will be shown that the problem (1.7), (1.2) has a unique solution for arbitrary φ± ∈ L²(R±, |µ|^α dµ). More general equations of the Fokker-Plank type will be considered also.

Recall that two closed operators T1 and T2 in a Hilbert space H are called similar if there exists a bounded and boundedly invertible operator S in H such that S dom(T1) = dom(T2) and T2 = S T1 S^{-1}.

The central result is the following theorem.

Theorem 1.1. Assume that J = J* = J^{-1}, L = L* ≥ 0, ker L = 0, and that the operator B = JL is similar to a self-adjoint operator. Then problem (1.4)-(1.5) has a unique solution for arbitrary φ± ∈ H±.

The proof is given in Subsection 2.2. The half-space problem (τ = ∞) is considered in Subsection 2.3. In Section 3 we consider correctness and nonhomogeneous equations.

The formal similarity between Eqs. (1.1)-(1.2) and certain problems of neutron transport, radiative transfer, and rarefied gas dynamics has given rise to the abstract kinetic equation (1.9); see [19,4,40,21,18,16] and references therein. When Eq. (1.9) is considered in a Hilbert space H, the operator T is self-adjoint and injective. The operator A is called a collision operator; usually it satisfies certain positivity assumptions (see e.g. [18]).
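Before the details, it may help to record how the kinetic equation (1.9) reduces to the Krein-space form (1.4); the display below is our own summary of the construction made precise in Section 4 (homogeneous case f = 0), not a quotation of the paper's numbered formulas.

```latex
% Write T = J|T| with J := \operatorname{sgn}(T) = Q_+ - Q_-.
% Then T\,d\psi/dx = -A\psi(x) takes, in the space H_T, the form (1.4):
\[
  J\,\frac{d\psi}{dx} + L\,\psi = 0, \qquad
  L := JB = J\,T^{-1}A = |T|^{-1}A,
\]
% where J is a signature operator in H_T and L is positive and self-adjoint
% in H_T provided A : H_T \to H_T' satisfies the positivity assumption of
% Section 4.
```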
For unbounded collision operators, equation (1.9) is usually considered in the space H_T, the completion of dom(T) with respect to (w.r.t.) the scalar product ⟨·,·⟩_T := (|T|·,·)_H. The interplay of dom(T) and dom(A) may vary, which leads to additional assumptions on the operators T and A. It is assumed in [16, Section 4] that T is bounded and A > δ > 0; in [7], that ran(T) ⊂ ran(A). Note that equation (1.7) cannot be included in these settings.

The second goal of the present paper is to remove the assumptions mentioned above. We will show that the following condition is natural for the case when A is unbounded: the operator A is a positive self-adjoint operator from H_T to the space H′_T, the completion of dom(T^{-1}) w.r.t. the scalar product (|T|^{-1}·,·)_H; see Section 4 for details. This condition is weaker than the assumptions mentioned above. On the other hand, it characterizes the case when equation (1.9) may be reduced to equation (1.4).

Theorem 1.1 leads to the similarity problem for J-positive differential operators. In Section 5, we use recent results concerning similarity [8,15,29,34,30,35,36,28] (see also [23,24,12] and references in [30,28]) to prove uniqueness and existence theorems for various equations of the type (1.1). Note also that abstract kinetic equations with nonsymmetric collision operators may be found in [21,18,16,7,41,51].

From another point of view, equation (1.1) belongs to the class of second order equations with nonnegative characteristic form. Boundary problems for this class of equations were considered by various authors (see [33,42] and references), but some restrictions imposed in this theory make it inapplicable to Eq. (1.1) (see the discussion in [45]). The case when w depends on µ or the operator L depends on x was considered, e.g., in [1,48]. The main results of this paper were announced in the short communications [26,25].

Notation. Let A be a linear operator from a Banach space H1 to a Banach space H2. In what follows, dom(A), ker A, ran A, and ‖A‖_{H1→H2} are the domain, kernel, range, and norm of A, respectively. If M is a subset of H1, then AM := {Ah : h ∈ M}. In the case H1 = H2, σ(A) and ρ(A) denote the spectrum and the resolvent set of A, respectively. As usual, σ_disc(A) denotes the discrete spectrum of A, that is, the set of isolated eigenvalues of finite algebraic multiplicity; the essential spectrum is σ_ess(A) := σ(A) \ σ_disc(A). By E_A(·) we denote the spectral function of a self-adjoint (or J-self-adjoint) operator A. We write f ∈ AC_loc(R) if the function f is absolutely continuous on each bounded interval in R. Put R+ := (0,+∞), R− := (−∞,0), and R̄ := R ∪ {∞}.

2 Existence and uniqueness of solutions

2.1 Preliminaries

In this section basic facts from the theory of operators in Krein spaces are collected. The reader can find more details in [2,38]. Suppose that H = H+ ⊕ H−, where H+ and H− are (closed) subspaces of H. Denote by P± the orthogonal projections from H onto H±. Let J = P+ − P− and [·,·] := (J·,·)_H. Then the pair K = (H,[·,·]) is called a Krein space (see [2,38] for the original definition). The form [·,·] is called an inner product in the Krein space K, and the operator J is called a fundamental symmetry (or a signature operator) in the Krein space K.

Let H = H+ ⊕ H− be a canonical decomposition. Then the norm ‖h‖_{H±} := (±[h,h])^{1/2} in the Hilbert space (H±, ±[·,·]) is called an intrinsic norm.
It is easy to prove (see [2, Theorem I.7.19]) that the norms ‖·‖_{H±} and ‖·‖_H are equivalent on H±; moreover,

γ ‖h‖_H ≤ ‖h‖_{H±} ≤ ‖h‖_H, h ∈ H±, (2.1)

where γ ∈ (0,1] is a constant. Statement (i) of the following proposition is due to Ginzburg (see [2]).

Proposition 2.1 (e.g. [2]). Let H = H+ ⊕ H− be a canonical decomposition and let P+ and P− be the corresponding mutually complementary projections onto H+ and H−, respectively. (i) If H1 is a maximal nonnegative subspace of K, then the mapping P+ ↾ H1 : H1 → H+ is a homeomorphism, that is, it is bijective, continuous, and the inverse mapping (P+ ↾ H1)^{-1} : H+ → H1 is continuous.

Let A be a densely defined operator in H. The J-adjoint operator A^{[*]} of A is defined by the relation [Ah, g] = [h, A^{[*]}g], h ∈ dom(A), g ∈ dom(A^{[*]}).

Let S be the semiring consisting of all bounded intervals with endpoints different from 0 and their complements in R̄ := R ∪ {∞}. If these assertions hold and P^B_+ and P^B_− are the corresponding mutually complementary projections onto the subspaces H^B_+ := P^B_+ H and H^B_− := P^B_− H, then the restrictions B_+ := B ↾ H^B_+ and B_− := B ↾ H^B_− are self-adjoint in the Hilbert spaces H^B_+ and H^B_−, and B_+ is positive.

Proof. Positivity of B_+ follows immediately from the J-positivity of B. B_+ is symmetric since it is positive. It follows from (2.2) that the spectrum of B is real, σ(B) ⊂ R. Thus σ(B_+) ⊂ R, and therefore B_+ is self-adjoint in H^B_+. The same arguments hold for B_−.

Now we write equation (1.4) in the form

dψ/dx + Bψ(x) = 0, 0 < x < τ, (2.4)

and suppose that ψ is a solution of (2.4), (1.5). We put ψ±(x) := P^B_± ψ(x); these functions admit spectral integral representations, and the integrals converge in the norm topologies of H^B_± as well as in the norm topology of H. Recall that, by (2.1), these topologies are equivalent. It follows immediately from Proposition 2.3 that the corresponding semigroup estimates hold for all h ∈ H^B_±. The boundary conditions (1.5) become (2.8). It follows from (2.3) and Proposition 2.1 (i) that there exist operators R± and G±. Using the operators R± and G±, we write (2.8) in an equivalent form; combining these equations, one gets an equation for the unknown boundary data.

Proof. Let us prove that the estimates (2.12) hold for h± ∈ H^B_± with certain constants β± < 1, in particular with a certain β+ < 1. Further, (2.14) yields (2.17).

Remark 2.6. Lemma 2.5 was obtained in another form in [7] (see Lemma 2.2 and the end of the proof of Theorem 3.4 there). Earlier, it was proved under the additional condition σ(B) = σ_disc(B) in [6]; see also [49,18]. We give another proof, which is based on Lemma 2.4 and improves the treatment of [49].

This lemma shows that the function ψ = ψ+ + ψ− constructed above is the unique solution of problem (1.4)-(1.5). One can check that the functions ψ± are continuous on [0,τ], and that the strong derivatives dψ±/dx exist and are (strongly) continuous on (0,τ) (see Section 3). This completes the proof of Theorem 1.1.

2.3 Half-space problems

Under the same assumptions, let us consider equation (2.4) on the infinite interval (0,+∞). The boundary conditions (2.20)-(2.21) correspond to this feature of the problem (see e.g. [18]). As above, φ+ ∈ H+ is a given vector. The solution is given by formula (2.22), where E^B_t, R+, and B+ are the operators defined in Subsection 2.2. The proof is simpler than the proof of Theorem 1.1; it is similar to the treatment of [16, Section 4], where equation (1.9) with bounded T and uniformly positive A was considered. We give a sketch here.

3 Correctness and nonhomogeneous problems

Let the assumptions of Subsection 2.2 be fulfilled. Since B+ is a positive self-adjoint operator in H^B_+, we see that U+(z) := e^{-zB+} is a bounded holomorphic semigroup in the sector |arg z| < π/2 (see e.g. [31, Subsection IX.1.6]). The same is true for the function U−(z) := e^{zB−}. In particular, this implies that for any ψ± ∈ H^B_± the solutions ψ of the corresponding problems are infinitely differentiable on (0,τ). One can obtain similar statements for the solution of problem (2.19)-(2.21).

Now consider the nonhomogeneous equation J dψ/dx + Lψ(x) = f(x), where f is an H-valued function.
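The displayed formulas of Subsections 2.2-2.3 survive here only partially. As a heavily hedged reconstruction from the surrounding text, the solutions have the following semigroup shape; the notation is ours and should not be read as the paper's exact equations (2.18) and (2.22).

```latex
% Two-point problem on (0, \tau): decompose along the spectral subspaces of
% the J-positive operator B and propagate by the bounded holomorphic
% semigroups of Section 3:
\[
  \psi(x) \;=\; e^{-xB_+}\,h_+ \;+\; e^{(\tau - x)B_-}\,h_-,
  \qquad P_+\psi(0) = \varphi_+, \quad P_-\psi(\tau) = \varphi_-,
\]
% with h_\pm \in H^B_\pm recovered from (\varphi_+, \varphi_-) through the
% operators R_\pm and G_\pm. In the half-space case \tau = \infty the second
% term is dropped and \psi(x) = e^{-xB_+} h_+ stays bounded as x \to \infty.
```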
4 Abstract kinetic equations

Let H be a complex Hilbert space with scalar product ⟨·,·⟩ and norm ‖·‖_H. Assume that T is a (bounded or unbounded) self-adjoint operator in H and that T is injective (i.e., ker T = 0). Let Q+ := E_T(R+) (Q− := E_T(R−)) be the orthogonal projection of H onto the maximal T-invariant subspace on which T is positive (negative). Then |T| := (Q+ − Q−)T is a positive self-adjoint operator. Note that (Q+ − Q−)T^{±1} = |T|^{±1}.

Following [4], let us introduce the scalar product ⟨h,g⟩_T = ⟨|T|h, g⟩ for h, g ∈ dom(T), with corresponding norm ‖·‖_T, and denote by H_T the completion of dom(T) with respect to (w.r.t.) this norm. Clearly, H ∩ H_T = dom(|T|^{1/2}) and ‖h‖_T = ‖|T|^{1/2}h‖_H for h ∈ dom(|T|^{1/2}). It is easy to see that for each g ∈ ran(|T|) = dom(T^{-1}), the linear functional ⟨·,g⟩ is continuous on dom(T) w.r.t. the norm ‖·‖_T; besides, its norm is equal to ‖|T|^{-1/2}g‖_H. So one can use the H-scalar product as a pairing to identify H′_T with the dual H*_T of H_T. The operator |T| (|T|^{-1}) has a natural isometric extension from H_T onto H′_T (from H′_T onto H_T). We use the same notation for the extensions.

We consider the equation

T dψ/dx = −Aψ(x), 0 < x < τ, (4.2)

supplemented by "half-range" boundary conditions in the form

Q+ψ(0) = φ+, (4.3)    Q−ψ(τ) = φ−. (4.4)

In the abstract kinetic theory the operator A is called a collision operator. It may have any of a number of properties (see [21,18]). Here we consider the case when A is a positive self-adjoint operator from H_T to H′_T. We seek H_T-strong solutions (weak solutions in the terminology of [18, Section 2]) of problem (4.2)-(4.4); that is, it is supposed that d/dx is the strong derivative in H_T and that the equation holds in H′_T.

Proof. Put P± := Q± ↾ H_T. Then P+ and P− are mutually complementary orthogonal projections in H_T and J = P+ − P− is a signature operator. Note that L := JB is a positive self-adjoint operator in H_T (that is, B is J-positive and J-self-adjoint). Indeed, since L = (Q+ − Q−)T^{-1}A and (Q+ − Q−)T^{±1} = |T|^{±1}, we get L = |T|^{-1}A, which is positive and self-adjoint in H_T by the assumption on A.

5 Examples

If the spectrum σ(B) is real and discrete, then assumption (2.2) is equivalent to the Riesz basis property for the eigenfunctions of B. For ordinary and partial differential operators with indefinite weights, the Riesz basis property has been studied in great detail (see [22,49,5,8,14,50,47] and references therein). Below we consider several classes of differential equations with B (= JL) such that σ(B) ≠ σ_disc(B). The theorems obtained in the previous sections are combined with known similarity results for Sturm-Liouville operators with an indefinite weight. First, we consider in detail a nonhomogeneous version of equation (1.7). Other applications are indicated briefly (for homogeneous equations and the case τ < ∞ only). Using Propositions 3.1 and 3.2, one can extend these treatments to the half-space problems and to the nonhomogeneous case.

5.1 The equation

Let us consider the equation

(sgn µ)|µ|^α ∂ψ/∂x (x,µ) = ∂²ψ/∂µ² (x,µ), 0 < x < τ, µ ∈ R, (5.1)

where α > −1 is a constant. In the case τ < ∞, the associated boundary conditions take the form

ψ(0,µ) = φ+(µ) for µ > 0, ψ(τ,µ) = φ−(µ) for µ < 0.

If τ = ∞, we change them to the condition ψ(0,µ) = φ+(µ) for µ > 0, together with a boundedness condition on ψ(x,·) for large x.

To write (5.1) in the form (4.2), one can put H = L²(R), (Ty)(µ) = (sgn µ)|µ|^α y(µ) and A : y → −y″. Then H_T = L²(R, |µ|^α dµ) and H′_T = L²(R, |µ|^{-α} dµ). It is assumed that A is an operator from H_T to H′_T and that it is defined on the natural domain dom(A) = {y ∈ H_T : y, y′ ∈ AC_loc(R) and y″ ∈ H′_T}. One can find the operators Q± (see Section 4) and check that J := (Q+ − Q−) ↾ H_T coincides with the operator J defined by (1.6). Consider the operators B := T^{-1}A and L := JB.
Both operators are defined on dom(A) by the differential expressions

(By)(µ) = −(sgn µ)|µ|^{-α} y″(µ), (Ly)(µ) = −|µ|^{-α} y″(µ).

Clearly, By, Ly ∈ H_T for all y ∈ dom(A), so B and L are operators in H_T. It is easy to check (see e.g. [15]) that L is a positive self-adjoint operator in H_T. It was proved in [15] (see also [10,23,34]) that the operator B is similar to a self-adjoint operator in the Hilbert space L²(R, |µ|^α dµ). By Theorem 4.1, we obtain the following result: problem (5.1) with the above boundary conditions has a unique solution for arbitrary φ± ∈ L²(R±, |µ|^α dµ).

5.2 The case when L is uniformly positive

Consider equation (1.4) under the assumption L = L* > δ > 0, i.e., the operator L is uniformly positive in the Hilbert space H. As before, put B = JL, where J is a signature operator in H. In this case, B is similar to a self-adjoint operator iff ∞ is not a singular critical point of B (see Proposition 2.2). For ordinary differential operators with indefinite weights, the regularity of the critical point ∞ is well studied, even in the case of a finite number of turning points (i.e., the points where the weight w changes sign). We will use one result that follows from [8]. The following definition is an improved version of Beals' condition [5].

Definition 5.2 ([8]). A function w is said to be simple from the right at µ0 if there exists δ > 0 such that w is nonnegative (nonpositive) on [µ0, µ0 + δ] and satisfies there a one-sided regularity condition with a certain number β (see [8]). A function w is said to be simple at µ0 if it is simple from the right and simple from the left at µ0 (with, possibly, different numbers β).

Remark 5.5. The half-space problem for equation (5.8) with α = 1 and k > 0 was studied in [44]. Note that if α > 0, the operator L associated with Eq. (5.8) is not uniformly positive. To extend the method of the present paper to the case α > 0, one should prove that 0 is a regular critical point of the operator B : y → (sgn µ)|µ|^{-α}(−y″ + ky) (cf. Subsection 5.3). Improvements of condition (5.6) may be found in [8] (the end of Subsection 3.1) and [14]. Note also that [8, Theorem 3.6] is valid for higher order ordinary differential operators.

5.3 The Fokker-Plank equation

In the case when inf σ_ess(L) = 0, the similarity problem for the operator B : y → (1/w)(−(py′)′ + qy) is more difficult. This question was considered in [10,15,23,12,29,34,30,35,27] (see also references therein). A general method was developed in [29,30], where the operator B with w(µ) = sgn µ and p(µ) ≡ 1 was studied. However, all results contained in [30] are valid, without changes in the proofs, for the general Sturm-Liouville operator B with one turning point. The approach of [30] was applied to the case q ≡ 0 in [35,36,28], where the corresponding similarity theorem was proved.

The author expresses his gratitude to V.A. Derkach, who drew the author's attention to the papers [5,32].
Effect of short-term prewarming on body temperature in arthroscopic shoulder surgery

Background: Hypothermia (< 36°C) is common during arthroscopic shoulder surgery. It is known that 30 to 60 minutes of prewarming can prevent perioperative hypothermia by decreasing body heat redistribution. However, the effect of short-term prewarming (less than 30 minutes) on body temperature in such surgery has not been reported yet. Therefore, the aim of this prospective study was to investigate the effect of short-term prewarming for less than 30 minutes using a forced-air warming device on body temperature during the interscalene brachial plexus block (ISBPB) procedure in arthroscopic shoulder surgery before general anesthesia.

Methods: We randomly assigned patients scheduled for arthroscopic shoulder surgery to receive either a cotton blanket (not pre-warmed, group C, n = 26) or a forced-air warming device (pre-warmed, group F, n = 26). Temperature was recorded every 15 minutes from entering the operating room until leaving the post-anesthetic care unit (PACU). Shivering and thermal comfort scale were evaluated during the stay in the PACU.

Results: There were significant differences in body temperature between group C and group F from 30 minutes after induction of general anesthesia to 30 minutes after arrival in the PACU (P < 0.05). The median duration of prewarming in group F was 14 min (range: 9-23 min). There was no significant difference in thermal comfort scale or shivering between the two groups in the PACU.

Conclusions: Our results showed that short-term prewarming using a forced-air warming device during ISBPB in arthroscopic shoulder surgery had a beneficial effect on perioperative hypothermia.

Preventive methods include skin surface warming [9], warm and humidified breathing circuits [10], and warming fluids or blood using specific devices [11]. Among these methods, forced-air warming is a very effective and safe method [12]. Forbes et al. [13] have recommended prewarming for 30 minutes before surgery to prevent perioperative hypothermia. Sessler et al. [14] have concluded that peripheral compartment heat content is increased in clinically important amounts within 30 minutes of warming. However, it is impractical to warm for more than 30 minutes due to thermal discomfort and congestion of the pre-anesthetic care unit. The effect of short-term prewarming (less than 30 minutes) on body temperature during arthroscopic shoulder surgery has not been reported yet. Therefore, the aim of this prospective study was to investigate the effect of short-term prewarming using a forced-air warming device on body temperature during interscalene brachial plexus block (ISBPB) in arthroscopic shoulder surgery before general anesthesia.

MATERIALS AND METHODS

This study was approved by the Institutional Review Board.

RESULTS

For this study, 54 patients were enrolled initially. However, 2 patients were excluded due to delayed recovery from general anesthesia and conversion to open surgery (Fig. 1).
There was no significant difference in age, operation room temperature, operation time, amount of intravenous fluid, volume of irrigation fluid, or duration of the ISBPB procedure between the two groups (Table 1). The median duration of prewarming in group F was 14 min (range: 9-23 min). The temperature on arrival at the operation room did not differ significantly between the two groups (36.6 ± 0.4°C in group C and 36.8 ± 0.4°C in group F, P = 0.156, Table 2). There were significant differences in body temperature between group C and group F from 30 minutes after induction of general anesthesia to 30 minutes after arrival in the PACU (P = 0.039 at 30 minutes after induction; P = 0.003 at 45 minutes after induction; P = 0.001 at 60 minutes after induction; P = 0.004 at the end of surgery; P < 0.001 on arrival at the PACU; P < 0.001 at 30 minutes after admission to the PACU, Fig. 2). The temperature on arrival at the PACU for patients in group C was significantly lower than that for patients in group F (35.6 ± 0.4°C in group C and 36.1 ± 0.4°C in group F, P < 0.001, Table 2). The incidence of hypothermia in the operation room was 96.2% (25/26) in group C and 57.7% (15/26) in group F (P = 0.003, Table 2). The incidence of hypothermia in the PACU was 69.2% (18/26) in group C and 34.6% (9/26) in group F (P = 0.026, Table 2). Twenty percent of patients in group F and 40% of patients in group C showed moderate hypothermia intraoperatively; however, the severity of hypothermia was not significantly different between the two groups (P = 0.239, Table 2). The percentages of patients who felt moderately cold were not significantly different between the two groups (26.9% in group C and 19.2% in group F, P = 0.743, Table 2). No shivering occurred in either group during the stay in the PACU. There was no significant difference in NRS pain score between the two groups.

DISCUSSION

In the present study, prewarming for less than 30 minutes was effective in preventing perioperative hypothermia. After induction of general anesthesia, core temperature decreases in three phases. The first phase shows a rapid decrease in temperature due to core-to-peripheral redistribution of body heat within 1 hour after induction of general anesthesia. In the second phase, a slow and linear decrease occurs for 2 to 4 hours, simply because heat loss exceeds metabolic heat production. In the third phase, core temperature reaches a plateau and remains constant until the end of surgery, at 3 to 4 hours after the induction of general anesthesia [15]. Vanni et al. [16] have reported that it is more effective to perform both preoperative and intraoperative warming than intraoperative warming alone. National Institute for Health and Care Excellence clinical guideline 65 recommends prewarming to prevent perioperative hypothermia. However, prewarming is not routinely performed due to practical restrictions. It is easy to overlook the temperature of the patient when performing regional anesthesia for postoperative analgesia before induction of general anesthesia. Horn et al. [17] have demonstrated that prewarming for 15 minutes before and after epidural analgesia is effective in preventing perioperative hypothermia. However, prewarming during a regional procedure, without using additional time, has not been reported previously. There have been some controversial results about the optimal duration of prewarming.
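The hypothermia incidences reported above (25/26 in group C vs. 15/26 in group F in the operating room) form a 2 × 2 table that can be re-checked with standard tests; with Yates' continuity correction the chi-square p-value reproduces the reported P = 0.003. The paper does not state which test it used, so the choice below is our assumption.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: group C, group F; columns: hypothermic, not hypothermic (OR data).
table = [[25, 1], [15, 11]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction (2x2 default)
print(f"chi-square p = {p:.3f}")                  # ~0.003, matching Table 2
print(f"Fisher exact p = {fisher_exact(table)[1]:.4f}")
```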
Although some experiments have revealed that 30 minutes of prewarming is needed to gain heat content exceeding the amount of redistribution [14], another study has shown that prewarming for 20 minutes can reduce the risk of perioperative hypothermia [18]. Jo et al. [19] have conducted a controlled trial and found that prewarming for 20 minutes cannot reduce the incidence of intraoperative hypothermia, although it has an effect on the severity of hypothermia. Although the duration of prewarming was different among our patients, our results showed that a short period of prewarming (median duration of the ISBPB procedure in group F: 14 min, range: 9-23 min) was effective in preventing perioperative hypothermia. Therefore, it would be important to perform prewarming even though the prewarming time during the ISBPB procedure before general anesthesia is short.

Fig. 2. Error bars indicate SD of temperature readings at each time. Baseline: temperature upon arrival at the operation room, T0-60: temperature immediately to 60 min after induction of general anesthesia (checked every 15 minutes), T_End: temperature at the end of surgery, P1: temperature upon arrival at the post-anesthetic care unit (PACU), P2: temperature at 30 min after admission to the PACU. Body temperatures were measured by esophageal thermometer intraoperatively and by tympanic thermometer at baseline and in the PACU. There were significant differences in perioperative body temperatures between the two groups (*P < 0.05, †P < 0.001, compared with group C).

Lim et al. [20] have shown that the ISBPB procedure can reduce the risk of perioperative hypothermia caused by anesthetic-impaired thermoregulation by decreasing the requirement for general anesthetics. The temperature of our patients could have been higher than that in the study of Lim et al. [20] due to prewarming and intraoperative warming in our study. As a limitation of our study, the duration of the ISBPB procedure differed for each patient; however, the difference between the two groups was not statistically significant. Although the temperature of the tympanic membrane was measured preoperatively and postoperatively, esophageal temperature was measured only intraoperatively due to the restriction of the surgical position. Duration of anesthesia and surgery and the amount of irrigation fluid were different between the two groups, although statistically insignificant. Duration of hypothermia was not recorded because body temperature recovery (> 36°C) in the PACU was not evaluated. The temperature of patients in this study was higher than that in another study [6] even though a relatively large amount of irrigation fluid was used. This might be due to the fact that we used intraoperative forced-air warming for all patients in this study. Further studies are needed to address the effect of the amount of irrigation fluid on patient temperature. In previous studies, warmed irrigation fluid and forced-air warming during surgery have been found to be ineffective in preventing core temperature decrease [6,12]. In addition, intraoperative forced-air warming is only effective at 60 minutes after surgery [12]. Therefore, raising peripheral temperature in advance might be meaningful in preventing the core temperature decrease due to body heat redistribution.

In conclusion, short-term prewarming using a forced-air warming device during the ISBPB procedure in arthroscopic shoulder surgery has a beneficial effect on perioperative hypothermia.
FUS-dependent loading of SUV39H1 to OCT4 pseudogene-lncRNA programs a silencing complex with OCT4 promoter specificity

The resurrection of pseudogenes during evolution produced lncRNAs with new biological functions. Here we show that pseudogene evolution created an Oct4 pseudogene lncRNA that is able to direct epigenetic silencing of the parental Oct4 gene via a 2-step, lncRNA-dependent mechanism. The murine Oct4 pseudogene 4 (mOct4P4) lncRNA recruits the RNA-binding protein FUS to allow the binding of the SUV39H1 HMTase to a defined mOct4P4 lncRNA sequence element. The mOct4P4-FUS-SUV39H1 silencing complex holds target site specificity for the parental Oct4 promoter, and interference with individual components results in loss of Oct4 silencing. SUV39H1 and FUS do not bind parental Oct4 mRNA, confirming the acquisition of a new biological function by the mOct4P4 lncRNA. Importantly, all features of mOct4P4 function are recapitulated by the human hOCT4P3 pseudogene lncRNA, indicating evolutionary conservation. Our data highlight the biological relevance of rapidly evolving lncRNAs that infiltrate into central epigenetic regulatory circuits in vertebrate cells.

Scarola et al. identify a conserved OCT4 pseudogene mechanism of action and demonstrate that the OCT4 pseudogene lncRNA is required for FUS-dependent loading of the SUV39H1 histone methyltransferase to the promoter of the parental OCT4 gene.

Pseudogenes are non-functional gene copies that have lost protein-coding potential. Precise annotation and integration of functional genomics data revealed a high number of pseudogenes that have evolved into new functional elements, producing long noncoding RNAs (lncRNAs) in a tightly controlled manner [1,2]. In many cases, sequence similarity of pseudogene-derived lncRNAs with parental gene transcripts provides the rational basis for pseudogene-dependent control of ancestral gene expression. Pseudogene lncRNAs have been reported to compete with parental gene transcripts for miRNAs or RNA-binding proteins or, alternatively, can give rise to endo-siRNAs [3-8]. Antisense transcription of pseudogenes can mediate epigenetic silencing of ancestral genes in trans, presumably by pairing with ancestral sense gene transcripts [9,10]. Remarkably, pseudogene-derived lncRNAs have also been demonstrated to act as scaffolds for chromatin-modifying complexes that can modulate gene expression at multiple loci across the genome [11,12].

We recently reported on a new mechanism of ancestral gene regulation that depends on pseudogene lncRNA-dependent recruitment of an epigenetic silencing complex to the Oct4 promoter in trans [17]. Induction of mESC differentiation results in efficient upregulation of the X-linked mOct4P4 gene that encodes the mOct4P4 lncRNA. The resulting nuclear-restricted mOct4P4 lncRNA forms a complex with the HMTase SUV39H1 and targets H3K9me3 and HP1 to the promoter of the parental Oct4 gene on chromosome 17, leading to gene silencing in trans. Importantly, this mechanism does not involve pairing of Oct4 sense and pseudogene antisense RNAs. To this end, the lncRNA sequence determinants and the evolutionary importance of mOct4P4 pseudogene lncRNA-dependent silencing of Oct4 are not known.

Here, we show that the human POU5F1P3 pseudogene-derived lncRNA, hOCT4P3, is a functional homolog of the murine Pou5f1P4 lncRNA in OVCAR-3 ovarian cancer cells, demonstrating evolutionary constraint on pseudogene-lncRNA-mediated epigenetic silencing of OCT4.
Performing mOct4P4 lncRNA pulldown experiments and a mOct4P4 lncRNA deletion analysis, we demonstrate that the RNA-binding protein FUS and a 200-nucleotide mOct4P4/hOCT4P3 region are essential for Oct4/OCT4 silencing in mouse and human cells. Binding of FUS to the endogenous, full-length mOct4P4/hOCT4P3 lncRNAs allows subsequent binding of SUV39H1 to the 200-nucleotide lncRNA element, forming a silencing complex with target specificity for the parental Oct4/OCT4 promoter. In experimental cell lines, the 200-nt mOct4P4/hOCT4P3 lncRNA sequence element is sufficient to guide SUV39H1-dependent Oct4/OCT4 silencing, even in the absence of FUS. We thus propose a model where FUS represents a licensing factor that mediates the accessibility of the 200-nucleotide mOct4P4/hOCT4P3 element to SUV39H1 binding, thereby imposing target specificity of the silencing complex towards the parental Oct4/OCT4 gene promoter. Our data highlight the evolutionary relevance of pseudogene lncRNA-mediated control of parental gene expression and the role of FUS in instructing the formation of an epigenetic regulatory complex with target site specificity defined by a lncRNA component.

Results

Conserved role of hOCT4P3 and mOct4P4 in silencing parental gene expression. We recently demonstrated that the mouse mOct4P4 lncRNA-SUV39H1 complex targets conserved promoter elements of the ancestral Oct4 gene in trans, mediating gene silencing during mESC differentiation. To support the relevance of pseudogene lncRNA-mediated epigenetic regulation of parental gene expression, we tested whether this mechanism is conserved in human cells. To date, eight human POU5F1 pseudogenes have been annotated in the human genome [25]. Similar to mOct4P4, the human hOCT4P1, hOCT4P3, and hOCT4P4 pseudogenes have an exon structure that is similar to the OCT4 mRNA and show 81%, 82%, and 82% overall sequence identity to OCT4, respectively [25]. We previously showed that OCT4 is frequently expressed in ovarian cancer cell lines and controls cancer-relevant pathways in OVCAR-3 cells [15]. This identifies OVCAR-3 ovarian cancer cells as an ideal model system to validate conservation of pseudogene lncRNA-mediated silencing of parental OCT4. hOCT4P3 lncRNA displays high sequence similarity to mOct4P4 and reproduces its nuclear localization pattern in a series of human ovarian cancer cell lines (Fig. 1a, b) [25]. Stable overexpression of hOCT4P3 in OVCAR-3 cells leads to reduced OCT4 expression and downregulation of the self-renewal transcription factors SOX2, NANOG, and KLF4, indicative of impaired self-renewal circuits (Fig. 1c). Quantitative real-time polymerase chain reaction (RT-PCR) experiments revealed that hOCT4P3 and OCT4 transcript levels are 130- or 150-fold lower than that of the housekeeping gene DAXX. This indicates that, although present at low copy number, hOCT4P3 has an important role in parental gene expression control (Supplementary Fig. 1a). To demonstrate conservation of hOCT4P3 and mOct4P4 function, we used the CRISPR/dCas9-HA-KRAB system to silence hOCT4P3 or mOct4P4 lncRNA expression in OVCAR-3 or mESC cells, respectively. We first generated mESC and human OVCAR-3 ovarian cancer cell lines stably expressing an HA-tagged version of a catalytically dead Cas9 fused to the Kruppel-associated box (dCas9-HA-KRAB; dCas9 empty cells).
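All qRT-PCR comparisons in this section rest on relative quantification against a housekeeping gene (ACTIN or Gapdh) and a calibrator sample, i.e., the 2^-ddCt convention. A generic sketch of that arithmetic follows; the Ct values are invented for illustration and do not come from the study.

```python
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt method.

    ct_target, ct_ref -- Ct of the gene of interest / housekeeping gene
    *_cal             -- the same pair measured in the calibrator sample
    """
    d_ct = ct_target - ct_ref          # normalize to ACTIN or Gapdh
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct - d_ct_cal)

# e.g. OCT4 in dCas9/sgOCT4P3 cells relative to dCas9 empty cells:
print(fold_change(ct_target=24.1, ct_ref=17.0,
                  ct_target_cal=25.6, ct_ref_cal=17.1))  # ~2.6-fold up
```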
In a subsequent step, dCas9 empty cells were stably transfected with an expression vector encoding short-guide RNAs (sgRNAs) that locate dCas9-HA-KRAB to the promoter region of the Pou5f1P4/POU5F1P3 genes (dCas9 sgOct4P4 mESCs or dCas9 sgOCT4P3 OVCAR-3 cells). Expression of dCas9-HA-KRAB and the respective sgRNAs in experimental mESCs and OVCAR-3 cells was validated by western blotting and RT-PCR (Fig. 1d).

We previously demonstrated that mOct4P4 is efficiently upregulated during in vitro mESC differentiation [17]. Here, we used embryoid body (EB) differentiation as a model system to address the impact of reduced mOct4P4 lncRNA expression on self-renewal and early differentiation markers. dCas9 empty and dCas9 sgOct4P4 mESCs were cultivated in hanging drop cultures in the absence of the self-renewal factor leukemia inhibitory factor (see "Methods"). We found that upregulation of mOct4P4 expression was strongly impaired during EB differentiation of dCas9 sgOct4P4 mESCs (Fig. 1e). This effect was paralleled by inefficient Oct4/OCT4 silencing during 10 days of EB differentiation at the RNA and protein level (Fig. 1f, Supplementary Fig. 1b). Accordingly, we found increased expression of the self-renewal transcription factors Sox2, Nanog, and Gdf3 and reduced expression of the early differentiation markers Fgf5 and Nestin (Fig. 1g). On the functional level, dCas9 sgOct4P4 embryoid bodies showed poor formation of contractile cardiomyocyte structures, indicative of in vitro differentiation defects (Fig. 1h, Supplementary Fig. 1c, Supplementary Movies 1 and 2). Importantly, reduced expression of human hOCT4P3 in dCas9 sgOCT4P3 OVCAR-3 cells was paralleled by increased expression of OCT4 at the RNA and protein level (Fig. 1j, k). This effect was paralleled by reduced H3K9me3 at conserved elements of the promoter of the parental OCT4 gene (Fig. 1l). Based on our loss- and gain-of-function experiments, we conclude that hOCT4P3 recapitulates mOct4P4 function in human OVCAR-3 cells. Importantly, data from the dCas9-HA-KRAB loss-of-function models also demonstrate that the endogenous mOct4P4 and hOCT4P3 lncRNAs have a suppressive action on the Oct4/OCT4 promoter in mESCs and OVCAR-3 cells.

Our results demonstrate the evolutionary conservation of H3K9me3-dependent silencing of parental Oct4/OCT4 by the mouse and human mOct4P4 and hOCT4P3 sense lncRNAs. This further implies the existence of defined lncRNA sequence elements essential for site-specific targeting of SUV39H1 to the Oct4/OCT4 promoter.

A deletion analysis identifies mOct4P4 lncRNA regions essential for Oct4 silencing. The MS2 RNA tagging system enabled us to demonstrate that a mOct4P4 lncRNA-SUV39H1 complex locates to the promoter of the ancestral Oct4 gene in trans [17]. In order to identify lncRNA regions essential for mOct4P4 function, we used a mESC cell line stably expressing a flag-tagged version of the MS2 phage coat protein (MS2-flag mESCs) as well as mOct4P4 deletion constructs that were tagged with 24 repeats of the MS2 RNA stem loop motif (Fig. 2a, Supplementary Fig. 2a). To ensure nuclear localization, the ectopically expressed lncRNAs contained mOct4P4 regions corresponding to the 5′ and 3′ UTR regions of parental Oct4, previously shown to determine nuclear restriction of the endogenous mOct4P4 lncRNA (Fig. 2a) [17].
We next evaluated the ability of lncRNAs derived from the mOct4P4-24xMS2 deletion constructs to (i) tether the flag-tagged MS2 phage coat protein to the Oct4 promoter and (ii) trigger increased H3K9me3 levels at the Oct4 promoter. Anti-flag ChIP experiments revealed that only the MS2 RNA-tagged full-length, Δ200, and Δ400 Oct4P4-24xMS2 lncRNAs were able to locate the flag-tagged MS2 protein to the promoter of the ancestral Oct4 gene and to trigger a local increase of H3K9me3 (Fig. 2f, g, Supplementary Fig. 2b). Accordingly, MS2 RNA-tagged Oct4P4 lncRNA versions that failed to suppress Oct4 expression (Δ600, Δ800, Δ994; 5′ + 3′; Fig. 2d, e) were unable to locate flag-tagged MS2 and H3K9me3 to the Oct4 promoter (Fig. 2f, g). Of note, ectopically expressed full-length mOct4P4-24xMS2 lncRNA was exclusively recruited to the Oct4 promoter but not to the promoters of the Daxx, H2Q10, Ceher1, Pp1r18, and Rab5A genes that are localized up- and downstream of Oct4 on chromosome 17 (Supplementary Fig. 2c). Together, this indicates that a 200-nucleotide sequence spanning positions 984-1183 of the mOct4P4 lncRNA has a central role in orchestrating target-site-specific epigenetic silencing of the ancestral Oct4 gene in trans. We conclude that the mOct4P4 pseudogene lncRNA contains two regions with an essential role in silencing of the ancestral Oct4 gene: (i) 5′- and 3′-located sequences to ensure nuclear lncRNA localization and (ii) region 984-1183, which directs H3K9me3 to the Oct4 promoter.

Fig. 1 Conserved function of hOCT4P3 and mOct4P4 lncRNAs. a Schematic representation of the murine mOct4P4 and human hOCT4P3 pseudogenes. Length of sequence elements and percentage of sequence homology are indicated. Gray boxes, sequences with homology to the Oct4/OCT4 5′UTR; gray lines, sequences with homology to the Oct4/OCT4 3′UTR. A centrally located, 334-bp spliced fragment is exclusively present in mOct4P4 (29). b Subcellular localization of hOCT4P3 in the human ovarian cancer cell lines OVCAR-3, SKOV3, TOV-112D, and CAOV3 as determined by quantitative RT-PCR (qRT-PCR). Shown values refer to the percentage of total RNA expression. c Quantitative RT-PCR analysis of hOCT4P3 (left panel), OCT4 and pluripotency marker genes (right panel) in OVCAR-3 cells stably expressing hOCT4P3. Expression levels were normalized against ACTIN. d dCas9-HA-KRAB western blotting analysis (top) and RT-PCR analysis (bottom) of Oct4 pseudogene guide RNA (sgOct4P4, sgOCT4P3) in mouse embryonic stem cells (mESCs) (left panel) and OVCAR-3 cells (right panel). ACTIN and Gapdh were used as controls. e, f mOct4P4 lncRNA (e) and Oct4 (f) expression in self-renewing mESCs (EB T0) and during 10 days of embryoid body (EB) differentiation (EB D3-D10). Expression levels were normalized to Gapdh. g qRT-PCR analysis of self-renewal marker genes (left panel) or markers of early mESC differentiation (right panel) in dCas9/sgOct4P4 mESCs. Expression values were normalized against Gapdh. h Percentage of contractile cardiomyocyte structures in embryoid bodies (EBs) obtained from dCas9 or dCas9/sgOct4P4 cells. i, j Quantitative RT-PCR showing hOCT4P3 lncRNA (i) and OCT4 (j) expression in dCas9 or dCas9/sgOCT4P3 OVCAR-3 cells. Expression values were normalized using ACTIN. k OCT4 expression in dCas9 and dCas9/sgOCT4P3 knockdown OVCAR-3 cells as determined by western blotting. ACTIN was used as control. Numbers represent the OCT4/ACTIN ratio (dCas9 empty was set to "100"). l Chromatin immunoprecipitation (ChIP) analysis of the OCT4 promoter region in dCas9 and dCas9/sgOCT4P3 OVCAR-3 cells using H3K9me3 antibodies. Error bars represent standard deviation; precise p values are indicated; n, number of independent experiments carried out.
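The ChIP-qPCR readouts used throughout (e.g., Fig. 1l, Fig. 2g) are conventionally expressed as percent of input. A minimal sketch of that calculation, assuming ~100% PCR efficiency and a hypothetical 1% input fraction; the Ct values are illustrative, not the study's data:

```python
import math

def percent_input(ct_input, ct_ip, input_fraction=0.01):
    """ChIP-qPCR enrichment expressed as 'percent of input'.
    Assumes ~100% PCR efficiency (template doubles each cycle)."""
    # Scale the input Ct to represent 100% of the chromatin
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Cts for an H3K9me3 ChIP amplicon at the Oct4 promoter
print(f"{percent_input(ct_input=24.0, ct_ip=24.5):.2f}% of input")  # ~0.71%
```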
FUS interacts with endogenous mOct4P4 to allow parental Oct4 gene silencing. In order to obtain additional insights into the mechanism of mOct4P4 lncRNA-mediated silencing of Oct4, we aimed to identify mOct4P4 lncRNA-interacting proteins. MS2-flag cells expressing full-length mOct4P4-24xMS2 and control mESCs expressing only a 24xMS2 stem loop control RNA were used to perform anti-flag RNA immunoprecipitation (RIP) experiments. The obtained control and mOct4P4-24xMS2 RNA immunoprecipitates were run on denaturing polyacrylamide gels. After Coomassie staining, protein bands specifically appearing in eluates from mOct4P4-24xMS2 RIPs were cut out from the gel and subjected to mass spectrometry (Fig. 4a). Flag-tagged MS2 as well as an additional set of proteins were shown to be specifically over-represented in the analyzed protein bands obtained from mOct4P4 lncRNA RIP eluates (Fig. 4a, Supplementary Table 1a, Supplementary Data 1).

Given its reported involvement in gene silencing, we focused our interest on the RNA- and DNA-binding protein FUS 28,29. In addition to transcriptional regulation, FUS has been demonstrated to be involved in DNA repair, alternative splicing, RNA localization and stress granules 30. FUS translocation events and mutations have been linked with liposarcoma and amyotrophic lateral sclerosis, respectively 31-33. Validation of RIP eluates by western blotting and RT-PCR confirmed the interaction of FUS with the full-length mOct4P4 lncRNA (Fig. 4b). We were also able to detect mOct4P4-24xMS2 lncRNA as well as MS2-flag protein in the eluates from anti-FUS RIP experiments, corroborating the FUS-Oct4 pseudogene lncRNA interaction (Fig. 4c).

We previously showed that the mOct4P4 lncRNA is essential to maintain SUV39H1-dependent silencing of parental Oct4 in primary mouse embryonic fibroblasts (pMEFs), indicating that persistent localization of the mOct4P4 lncRNA at the Oct4 promoter is essential to maintain Oct4 silencing in differentiated cells 17. To test whether the mOct4P4 lncRNA is required for the localization of FUS to the Oct4 promoter, we performed ChIP experiments in mOct4P4 lncRNA knockdown pMEFs. Our results show that loss of endogenous mOct4P4 lncRNA displaced FUS from the Oct4 promoter in pMEFs (Fig. 4f). Accordingly, siRNA-mediated depletion of FUS from pMEFs significantly increased Oct4 mRNA expression, recapitulating the effect of mOct4P4 knockdown on parental gene expression (Fig. 4g, h). This effect was paralleled by increased expression of the self-renewal transcription factors Sox2, Nanog and Klf4 (Fig. 4i). We conclude that FUS is essential for the initiation and maintenance of mOct4P4 lncRNA-mediated silencing of Oct4 in order to suppress self-renewal circuits in differentiated mouse cells.

We found that the SUV39H1 protein co-immunoprecipitated with the full-length mOct4P4-24xMS2 and 200 bp-mOct4P4-24xMS2 lncRNAs, but not with the −200 bp-mOct4P4-24xMS2 lncRNA (Fig. 5a). Interestingly, all ectopically expressed mOct4P4 lncRNA versions bound FUS in RIP experiments, suggesting that FUS binds multiple mOct4P4 lncRNA regions (Fig. 5b). In contrast, the mOct4P4-SUV39H1 interaction critically depends on the presence of the 200-nucleotide motif. Notably, we did not find evidence for direct interaction of SUV39H1 and FUS in co-immunoprecipitation assays (Supplementary Fig. 3a, b).
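Candidate interactors from such RIP-mass-spectrometry screens are typically shortlisted by enrichment over the stem-loop-only control. A minimal sketch of one plausible filter; the spectral counts and thresholds are illustrative assumptions, not the study's pipeline:

```python
# Hypothetical spectral-count table from RIP-MS eluates (illustrative numbers only).
spectral_counts = {
    #  protein      (mOct4P4-24xMS2 RIP, 24xMS2 control RIP)
    "MS2-flag":     (120, 115),   # bait, present in both eluates
    "FUS":          (45,  2),
    "Keratin_K10":  (30,  28),    # common contaminant, not enriched
    "SUV39H1":      (0,   0),     # absent from eluates, as observed
}

def enriched_candidates(counts, min_ip=5, max_ratio=0.2):
    """Keep proteins well represented in the lncRNA RIP but depleted in the
    stem-loop-only control (control/IP count ratio below max_ratio)."""
    hits = []
    for protein, (ip, ctrl) in counts.items():
        if ip >= min_ip and ctrl / ip <= max_ratio:
            hits.append(protein)
    return hits

print(enriched_candidates(spectral_counts))  # ['FUS']
```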
In addition, we did not find SUV39H1 peptides in our mass spectrometry data from the mOct4P4-24xMS2 lncRNA pull-down experiments (Supplementary Data 1). This is in line with the lack of SUV39H1 in published data on the FUS-interacting proteome 34-38. We conclude that direct SUV39H1-FUS interaction is not a prerequisite for silencing complex formation. In a second step, we transiently depleted FUS from experimental cells and performed anti-SUV39H1 RIP experiments followed by mOct4P4-specific RT-PCR. We found that loss of FUS abolishes SUV39H1 binding to the full-length mOct4P4 lncRNA (Fig. 5d, Supplementary Fig. 3d). Strikingly, binding of SUV39H1 to the 200 bp-mOct4P4-MS2 lncRNA (mOct4P4 positions 984-1183) does not require FUS (Fig. 5d). This indicates that binding of FUS to the full-length mOct4P4 lncRNA plays an important role in providing access for SUV39H1 to the 200-nucleotide region. However, in the context of the reduced lncRNA sequence complexity of the 200 bp-mOct4P4-MS2 construct, the critical 200-nucleotide region appears to be directly accessible to SUV39H1, rendering the action of FUS dispensable.

Oct4 mRNA and mOct4P4 lncRNA share high sequence identity, raising the question as to whether SUV39H1 and FUS may also interact with the endogenous Oct4 mRNA. Importantly, RIP experiments using mESCs demonstrated that under our experimental conditions SUV39H1 and FUS display binding specificity towards the mOct4P4 lncRNA but not Oct4 or other mRNAs such as Sox2, Nanog, Gapdh, or Actin (Fig. 5e, f). This demonstrates that sequence degeneration after mOct4P4 pseudogene formation resulted in the formation of binding sites for FUS and SUV39H1, conferring a new biological function on the mOct4P4 lncRNA. On the mechanistic level, our data indicate that FUS has a critical role in supporting the interaction of SUV39H1 with the full-length mOct4P4 lncRNA, suggesting that FUS licenses the formation of a functional SUV39H1-mOct4P4 lncRNA complex in mESCs.

FUS mediates targeting of SUV39H1 by mOct4P4 lncRNA to the Oct4 promoter. We next wished to investigate how lncRNA:protein binding requirements translate into site-specific targeting of a SUV39H1-containing silencing complex to the Oct4 promoter. We first validated whether FUS has a role in directing mOct4P4 lncRNA and SUV39H1 to the Oct4 promoter.

Figure legend (fragment): Oct4 expression values were normalized against Gapdh (d) or ACTIN (e). Shown numbers represent the OCT4/ACTIN ratio as the mean of three independent experiments (control was set to "100") (e). f, g ChIP analysis of the Oct4 promoter region in mESCs stably overexpressing the indicated constructs and using the described antibodies. qRT-PCR was performed to measure promoter enrichment. Only the mOct4P4 and 200 bp-mOct4P4 constructs localize to the Oct4 promoter (f) and drive H3K9me3 enrichment (g). Error bars represent standard deviation. Precise p values are indicated. n: number of independent experiments carried out.

Importantly, performing anti-flag ChIP we found that siRNA-mediated depletion of Fus does not impair the localization of the 200 bp-mOct4P4-24xMS2 lncRNA version to the Oct4 promoter of experimental mESCs (Fig. 6d). Accordingly, 200 bp-mOct4P4 overexpression results in H3K9me3 enrichment at the Oct4 promoter and a reduction of OCT4 protein expression in control but also Fus knockdown mESCs (Fig. 6e, f). Thus, FUS is dispensable for parental Oct4 silencing in the context of the minimal sufficient 200-nucleotide mOct4P4 construct.
However, in the context of the increased sequence complexity of the endogenous, full-length mOct4P4, FUS is essential to license the interaction between SUV39H1 and mOct4P4 and to allow the formation of a silencing complex with Oct4 promoter target specificity. To further dissect the requirements for Oct4 promoter targeting, we evaluated the relevance of SUV39H1 for targeting FUS and mOct4P4 lncRNA to the parental Oct4 gene. Anti-FUS ChIP experiments revealed that siRNA-mediated depletion of Suv39h1 delocalizes FUS from the Oct4 promoter in mESCs ectopically expressing full-length mOct4P4 or the 200 bp-Oct4P4 lncRNA (Fig. 6g). Importantly, siRNA-mediated knockdown of Suv39h1 abrogates the localization of the full-length mOct4P4-24xMS2 but also the 200 bp-mOct4P4-24xMS2 lncRNA version to the promoter of the ancestral Oct4 gene, as demonstrated by anti-flag ChIP. This effect was linked with impaired imposition of H3K9me3 at the Oct4 promoter and loss of parental Oct4 silencing in both experimental cell lines (Fig. 6h-k, Supplementary Fig. 4). These data highlight that FUS is essential to instruct the loading of the repressive SUV39H1 HMTase onto the critical 200-nucleotide mOct4P4 lncRNA region. This FUS-dependent step is central to program the target specificity of SUV39H1 towards the promoter of the parental Oct4 gene.

Functional conservation of a FUS-SUV39H1-OCT4 pseudogene lncRNA silencing complex. After identifying the critical players for mOct4P4 function, we set out to test whether all critical mechanistic steps are conserved in human OVCAR-3 cells. We first generated OVCAR-3 cell lines stably transfected with an expression vector encoding 24xMS2-tagged full-length hOCT4P3 (hOCT4P3-24xMS2) or a 24xMS2-tagged hOCT4P3 lncRNA region (200 bp-hOCT4P3-24xMS2) that corresponds to the functionally relevant 200-nucleotide mOct4P4 region (Fig. 7a, Supplementary Fig. 5a). Functional experiments were carried out after transiently transfecting the experimental cell lines with an expression vector encoding flag-tagged MS2. ChIP experiments using anti-flag and anti-H3K9me3 specific antibodies showed that the hOCT4P3-24xMS2 lncRNA localizes the flag-tagged MS2 protein to the promoter of the ancestral OCT4 gene, triggering a local increase in H3K9me3 (Fig. 7e, f). In line with this, western blotting and RT-PCR on protein and RNA fractions from anti-flag RIP eluates revealed that SUV39H1 and FUS co-immunoprecipitate with the full-length hOCT4P3-24xMS2 lncRNA (Fig. 7g, h). We conclude that all aspects of mOct4P4 function are recapitulated by hOCT4P3 in human cells. This demonstrates that pseudogene lncRNA-dependent silencing of Oct4/OCT4 represents an evolutionarily conserved mechanism to fine-tune the expression of the parental Oct4/OCT4 gene. On the mechanistic level, we propose a model where FUS binding to the endogenous mOct4P4/hOCT4P3 lncRNA plays an important role in rendering the 200-nucleotide region accessible for SUV39H1 binding. This step is essential to license the formation of a SUV39H1 HMTase-containing silencing complex with programmed target specificity towards the parental Oct4/OCT4 promoter (Fig. 8).

Discussion

Here, we investigate the molecular mechanism and evolutionary conservation of Oct4/OCT4 pseudogene lncRNA-mediated control of parental gene expression. Repression of hOCT4P3 or mOct4P4 lncRNA expression in human OVCAR-3 cells or mESCs using the CRISPR/dCas9-HAKRAB system resulted in loss of H3K9me3 at the OCT4/Oct4 promoter and elevated OCT4/Oct4 expression levels (both at the RNA and protein level) in human or mouse cells, respectively.
Fig. 4 FUS is required for mOct4P4 lncRNA-mediated silencing of Oct4 in mESCs. a Silver-stained protein gel of eluates obtained from mOct4P4-24xMS2 anti-flag RIP experiments. mESCs expressing flag-MS2 and mOct4P4-24xMS2 were used. The indicated bands specifically elute from mOct4P4-24xMS2 lncRNA. Protein identity was determined by mass spectrometry (Supplementary Methods). b Anti-flag RIP using mESCs expressing MS2-flag/full-length mOct4P4-24xMS2 or 24xMS2 RNA control cells using an anti-flag antibody. Agarose gel electrophoresis after quantitative RT-PCR demonstrates the presence of mOct4P4-24xMS2 stem loop RNA (bottom panel). Detection of FUS and MS2-flag proteins by western blotting (top and middle panels, respectively). Bands analyzed by mass spectrometry are indicated by numbers (1-6); complete data on protein identification are available in the provided Supplementary Data 1. c Anti-FUS RIP using MS2-flag mESCs expressing full-length mOct4P4-24xMS2 or 24xMS2 RNA control. The presence of FUS and flag-MS2 in eluates was validated by western blotting (top and middle panels, respectively). Quantitative RT-PCR followed by agarose gel electrophoresis verified the presence of mOct4P4-24xMS2 in anti-FUS RIP experiments (bottom panel). d FUS and OCT4 western blotting using eluates from mOct4P4-24xMS2 or 24xMS2 mESCs transiently transfected with the indicated siRNAs. ACTIN was used as loading control. Numbers represent the OCT4/ACTIN ratio as the mean of three independent experiments (24xMS2-CTRL siCTRL was set to "100"). e, f ChIP analysis of the Oct4 promoter region using an anti-FUS antibody in control or FUS knockdown mESCs (e) or pMEFs (f). Eluates were analyzed by qRT-PCR. g Fus and mOct4P4 expression levels in pMEFs transiently transfected with the indicated siRNAs, as determined by qRT-PCR. Expression levels were normalized to Gapdh. h, i qRT-PCR analysis using pMEF cells subjected to siRNA-mediated knockdown of mOct4P4 and Fus. Expression values for Oct4 (h) or self-renewal markers (i) were normalized against Gapdh. Error bars represent standard deviation. Precise p values are indicated. n, number of independent experiments carried out.

This indicates functional conservation of Oct4 pseudogene lncRNA-mediated silencing of parental gene expression in mouse and human cells. High overall sequence identity and conservation of mOct4P4 function in human cells suggested the existence of functionally relevant lncRNA regions. A deletion analysis identified a 200-nucleotide region in the mOct4P4 and hOCT4P3 lncRNAs that is required for targeting of the lncRNA-SUV39H1 silencing complex to the promoter of the ancestral Oct4/OCT4 gene, resulting in local H3K9 trimethylation. Binding of Oct4/OCT4 pseudogene lncRNA by SUV39H1 is in line with studies demonstrating the interaction of SUV39H1 HMTases with pericentric RNAs, telomere repeat-containing RNA (TERRA), LINE1 L1MdA 5′UTR elements, SINE B1 repeats and pRNAs of the rRNA cluster 39-42. Direct interaction of mOct4P4 lncRNA with SUV39H1 was recently demonstrated by in vitro EMSA experiments 37. SUV39H1 HMTase-RNA binding is reported to be promiscuous and characterized by low sequence specificity. This led to the hypothesis that the formation of lncRNA-SUV39H HMTase complexes with defined epigenetic function may depend on additional proteins or the presence of physiologically functional RNA:chromatin templates 43,44.
RNA pull-down experiments revealed a series of mOct4P4 lncRNA-interacting proteins with a potential role in silencing parental Oct4. Here, we demonstrate that the RNA binding protein FUS has a critical role in Oct4/OCT4 lncRNA-mediated silencing of OCT4. Loss of FUS prevents the formation of a full-length mOct4P4/hOCT4P3 lncRNA-SUV39H1 silencing complex, abrogating the initiation and maintenance of Oct4/OCT4 silencing. Notably, FUS is dispensable for the function of the minimal sufficient mOct4P4/hOCT4P3 lncRNA version (200 bp-mOct4P4; 200 bp-hOCT4P3). Thus, we conclude that FUS does not have a central role in closing the Oct4/OCT4 promoter. We propose that FUS is critical for structuring the long Oct4 pseudogene lncRNA template to allow the binding of SUV39H1 to the 200-nucleotide region, thereby defining a specialized SUV39H1-lncRNA complex with selective target specificity towards the parental Oct4/OCT4 promoter. Importantly, FUS and SUV39H1 do not bind to the Oct4 mRNA in RIP experiments. This demonstrates that the specific interaction with FUS and the noncoding RNA-guided SUV39H1 HMTase represents a new biological feature of Oct4P4/OCT4P3 lncRNAs that was acquired during pseudogene evolution. Future experiments will have to validate whether FUS has a more general role in epigenetic gene regulation by controlling the association of lncRNAs with epigenetic writers. In addition, the impact of Oct4/OCT4 promoter-associated pseudogene transcripts on transcriptional initiation and Oct4/OCT4 promoter evasion remains an interesting issue to be addressed.

In contrast to the selective requirement of FUS for full-length pseudogene lncRNA function, we found that SUV39H1 is essential for targeting both the full-length and the 200-nucleotide mOct4P4/hOCT4P3 lncRNA versions to the Oct4/OCT4 promoter. Thus, after FUS-dependent silencing complex formation, SUV39H1 and the 200-nucleotide mOct4P4/hOCT4P3 lncRNA regions hold the information for selective targeting and epigenetic silencing of the parental Oct4/OCT4 gene promoter. The requirement of FUS as a critical factor licensing endogenous mOct4P4/hOCT4P3 lncRNA function may also represent a regulatory mechanism that restricts pseudogene lncRNA-mediated silencing to a defined biological context. Along these lines, PRMT1-dependent arginine methylation of FUS was recently shown to prevent the interaction with the CCND1 gene promoter-associated noncoding RNA-D (pncRNA-D), thereby blocking the repression of the HAT activity of the CBP/p300 complex 28,29. Addressing post-translational modifications of FUS may identify windows of mOct4P4/hOCT4P3 function in development and disease.

In addition to mOct4P4/hOCT4P3, other pseudogene-derived lncRNAs, such as DUXAP8 and DUXAP10, have been shown to interact with epigenetic writers 12,45,46. However, DUXAP lncRNAs rather act as general scaffolds for epigenetic regulatory complexes that do not selectively target the parental DUXA gene. In contrast, pseudogene PTENP1 antisense transcripts drive DNMT1-dependent silencing of the parental PTEN gene by pairing with the 5′UTR of the nascent, sense PTEN RNA 9,11. We experimentally validated that Oct4 and mOct4P4 are exclusively transcribed in sense orientation, thus excluding extended RNA:RNA interactions 17. Thus, mOct4P4 and hOCT4P3 represent pseudogene sense lncRNAs that use a conserved mechanism to target and remodel the chromatin status of the parental gene promoter, located on a different chromosome.
Altogether, we propose a four-step model: (i) FUS binds mOct4P4/hOCT4P3 to (ii) allow SUV39H1 binding to the 200-nucleotide region, followed by (iii) sequence-specific targeting of the Oct4/OCT4 promoter, resulting in (iv) increased local H3K9me3 and HP1 levels and Oct4/OCT4 silencing (Fig. 8). The specific binding of SUV39H1 to H3K9me3 is anticipated to contribute to the maintenance of the local heterochromatin structure at the Oct4/OCT4 promoter 40,41. Silencing of Oct4/OCT4 in trans may depend on complex long-range chromatin interactions of the involved (pseudo)gene loci, alternative DNA structures or the recruitment of additional factors. Elucidating the mechanisms that functionally connect pseudogene loci with ancestral genes will provide new insights into the power of pseudogene-encoded lncRNAs in fine-tuning the expression of ancestral genes in development and disease.

Statistics and reproducibility. A one-tailed t test was performed to calculate p values, and statistical significance was set at p < 0.05. Each finding was confirmed by three independent biological replicates, unless specified otherwise. Error bars represent standard deviation.

Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability. All data generated or analyzed during this study are included in this published article and the related Supplementary Information files. Source data of blots and gels are shown in Supplementary Fig. 6.
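The one-tailed t tests on triplicate measurements described under Statistics and reproducibility reduce to a few lines; a minimal sketch, where the Welch (unequal-variance) variant and the example values are illustrative assumptions, not the authors' exact procedure:

```python
from scipy import stats

# Hypothetical triplicate qRT-PCR values (arbitrary units), not the study's data.
control   = [1.00, 0.95, 1.05]   # dCas9 empty
knockdown = [1.60, 1.75, 1.55]   # dCas9/sgOCT4P3, elevated OCT4 expected

# One-tailed t-test at alpha = 0.05, directional hypothesis: knockdown > control
t_stat, p_value = stats.ttest_ind(knockdown, control,
                                  equal_var=False, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```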
Sex and Rural/Urban Centre Location as Determinants of Body Image Self-Perception in Preschoolers

Body image and self-perception are highly related to psychological health and social well-being throughout the lifespan. Body image problems can lead to pathologies affecting quality of life. Thus, it is essential to analyse perceived self-image from an early stage. This study aimed to assess body image and dissatisfaction in preschoolers, analysing possible differences depending on sex (boy/girl) and school location (rural/urban). The sample consisted of 304 preschoolers from Extremadura (Spain) between three and six years of age. The Mann-Whitney U test was used to evaluate the differences in scores according to sex and centre location. The results showed significant differences in body shape perception depending on the student's sex, with girls perceiving themselves as having a higher Body Mass Index (BMI). Moreover, girls showed greater body dissatisfaction than their male counterparts, with greater disagreement between their perceived and desired figures. Actions and programmes to promote children's healthy body image need to be implemented with consideration for differences between the sexes.

Introduction

Positive embodiment and body appreciation are significant aspects of health and quality of life [1]. Body image describes how a person feels, thinks, and perceives their body [2], although the term is considered to be multidimensional [3]. The most commonly used body image measures are those that evaluate a person's appreciation of his or her physical appearance [4]. Although research on body image frequently adopts a pathologizing perspective, concentrating on body dissatisfaction, the importance of considering body appreciation and positive aspects of body image has been raised in recent years [5]. The scientific literature indicates that positive body esteem is associated with self-esteem, healthy eating habits, and higher physical activity levels regardless of sex [6,7]. Likewise, body image has been found to predict health and quality of life in both boys and girls [8]. Numerous studies have revealed several aspects of positive body image, such as positive opinions, respecting and feeling grateful for one's body, rejecting societal ideals of attractiveness, inner positivity influencing one's outward appearance, and having a broad conception of beauty [9]. In this line, surveys with girls and women have informed the developmental theory of embodiment [10], which considers healthy body image a condition of body-self unification, characterized by feeling "at one" with the body.

The ages at which these ideas start to develop may be learned through studies looking at potentially detrimental attitudes about weight and body size among youth. Some research suggests that children start to become conscious of their body image and how they feel about it at the age of three [11]. Moreover, findings suggest that girls aged just three are already emotionally committed to the ideal of thinness [12]. Children between three and six already see fatness negatively and favour a lean body [13]. Spiel and colleagues discovered that children between three and five years of age preferred larger figures to symbolize negative traits as opposed to positive traits [14]. Thus, preschoolers must be taught healthy attitudes toward their bodies, diet, and activity to prevent body image disorders and related pathologies [15].
On the one hand, a variety of sociocultural influences are linked to body image development [16]. Thompson et al. [17] indicate that the media, friends, and parents are three elements that impact the emergence of body dissatisfaction. Parents have a significant impact on how their children see their bodies by modelling attitudes and behaviours linked to beauty [18]. Research has also shown a connection between children's body dissatisfaction and peer influence, such as teasing, dialogue, or modelling [18]. Children's body dissatisfaction has also been linked to the promotion of idealized bodies in the media [19]. On the other hand, biological components are also significant for children's body image. Body mass index (BMI) has been linked to children's body image and use of body modification strategies [16]. Another essential factor to consider is sex, as research has shown that boys and girls may have different body worries [20].

In terms of body image assessment, the methods used in research with preschoolers frequently require neither reading proficiency nor much verbal participation from participants [21]. A commonly used technique, which has been tried with children as young as two-and-a-half years old, is to display two to three line drawings of various sizes while reading a list of words [22]. Another is the use of a figure rating scale in an interview setting, in which children are presented with age- and sex-appropriate pictorial representations of a range of body sizes, from extremely thin to extremely large [23]. Children are then asked to choose the images of themselves that they believe best represent their existing appearance (current) and their ideal appearance (ideal). Questionnaires have also been used regularly [24].

Consequently, the objective of the present research was to evaluate body satisfaction in preschoolers in public schools in Extremadura (Spain), assessing whether there were differences depending on sex or the location of the educational centre. This will make it possible to identify the current state of body image at the most critical ages of development and, subsequently, to implement educational programs and/or lines of action depending on the studied variables.

Materials and Methods

Participants. Participants were selected using a non-probabilistic sampling method based on convenience sampling [25]. The sample consisted of 304 preschool children (from 3 to 6 years old) from public schools in Extremadura (Spain).

Instruments. A questionnaire with five sociodemographic questions (sex, grade, school environment and age) was prepared. The Preschoolers' Body Scale [26] was used to record the perceived and desired figures. To determine body dissatisfaction, the value of the desired figure scale is subtracted from the value of the perceived figure scale. The authors reported a Fleiss' kappa of 0.61 from the judgment of ten expert paediatricians and a test-retest correlation performed with children (ρfrontal = 0.40; ρlateral = 0.55).

Procedure. The database of public schools in the Autonomous Community of Extremadura (Spain) belonging to the Department of Education and Employment of the Regional Government of Extremadura was used to access the sample (available at: http://estadisticaeducativa.educarex.es/?centros/ensenanzas/&curso=17&ensenanza_centro=101200001, accessed February 2022). Contact data were selected for those centres with the second stage of Early Childhood Education (3 to 6 years).
An e-mail was then sent to the early childhood education teachers with information about the study, requesting their collaboration. The schools interested in participating were sent the informed consent form to be signed by the children's legal guardians. A member of the research team then went to every educational centre to collect data from the children. In the regular classroom, with both the researcher and the teacher present, the participants first filled in the sociodemographic questionnaire with the help of the teacher. The teacher indicated to the students whether they should mark rural or urban based on the characteristics of their school, previously agreed with the researcher based on whether the locality had more or fewer than 20,000 inhabitants; those with fewer than 20,000 inhabitants were considered rural, as stated on the website of the Diputación de Cáceres (https://www.dip-caceres.es/, accessed February 2022). Secondly, the children were given The Body Scale for Preschoolers questionnaire. They were first given the first part of the questionnaire and approximately 5 min to think about and select the image that most resembled them. They were then given the second part of the questionnaire and 5 min to think about and select the image they would like to resemble (Appendix A).

Statistical Analysis. To analyse whether the distribution of the collected data met the assumption of normality, the Kolmogorov-Smirnov test was used, confirming the need to use nonparametric tests. The Mann-Whitney U test was used to analyse whether there were statistically significant differences between the perceived and desired figures according to sex and centre location. Moreover, to analyse body dissatisfaction, perceived-image and desired-image data were treated independently for both the frontal and lateral scales, and the mean value obtained from the frontal and lateral scales was then taken as a factor. Finally, Cronbach's alpha was used to calculate the reliability of each of the scales of the instrument.

Results

Table 1 shows the sample distribution according to sex, grade, and centre setting. The mean age of the participants was 4.42 years (SD = 0.82). Table 2 shows the differences between the perceived (frontal and lateral) and desired (frontal and lateral) figures according to sex and centre location. Significant differences were found for the perceived figure according to sex, with girls obtaining higher values than boys. Table 3 shows descriptive data and differences in body dissatisfaction according to sex and centre location. Statistically significant differences were found according to sex, with girls showing higher body dissatisfaction than boys. No statistically significant differences were found in the scores obtained for the perceived figure, desired figure, or dissatisfaction as a function of centre location.

Table 3. Descriptive data and differences in body dissatisfaction according to sex and centre location.

Finally, the reliability of each scale was 0.87 for the perceived figure and 0.74 for the desired figure, both satisfactory values above 0.70 according to Nunnally and Bernstein [27].
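The dissatisfaction score (perceived minus desired figure), the non-parametric sex comparison, and the reliability estimate described above reduce to a few lines of analysis code. A minimal sketch, with hypothetical scale choices rather than the study's data; the Cronbach's alpha helper follows the standard formula:

```python
import numpy as np
from scipy import stats

# Hypothetical figure-scale choices (1 = thinnest ... 7 = largest); illustrative only.
perceived_girls = np.array([4, 5, 4, 5, 3, 4])
desired_girls   = np.array([2, 3, 3, 2, 2, 3])
perceived_boys  = np.array([4, 3, 4, 4, 3, 4])
desired_boys    = np.array([4, 3, 3, 4, 3, 4])

# Body dissatisfaction = perceived figure minus desired figure
dissat_girls = perceived_girls - desired_girls
dissat_boys  = perceived_boys - desired_boys

# Non-parametric comparison by sex (normality rejected by Kolmogorov-Smirnov)
u_stat, p_value = stats.mannwhitneyu(dissat_girls, dissat_boys, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Hypothetical frontal and lateral items of the perceived-figure scale
frontal = np.array([4, 5, 4, 5, 3, 4])
lateral = np.array([4, 4, 4, 5, 3, 5])
print(f"Cronbach's alpha = {cronbach_alpha(np.column_stack([frontal, lateral])):.2f}")
```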
Discussion

This study arises from the need to understand body image perception in preschoolers, to characterize its current state in schools in Extremadura (Spain), and to inform the development of intervention programs.

Differences in body image perception with respect to sex have been widely studied in the literature, yielding different results over time. In this study, perceived-image scores were higher than ideal-image scores, contrasting with another study in the same population [28], although in the literature, as in this research, there is a preference for more linear figures [29]. Children at this age usually have a negative perception of larger figures, mainly due to sociocultural factors [14], and this is more pronounced in females [12], as boys prefer muscular bodies, selecting figures that represent an intermediate score even at a young age [30]. In this line, the desire of Canadian girls of these ages to be thinner is much stronger than that of their male peers [13]. In contrast, children may select a slimmer or larger ideal figure than their own for reasons related to their body fat and musculature, or their desire for an adult form, which should not necessarily cause them to be worried about their current perceived size [31]. Other studies have reported no differences between sexes, such as Lowes and Tiggemann [32], who found no differences in schoolchildren aged 3 to 5 years in either the perceived or the desired figure, or the one carried out by Pallan [33], finding no sex differences in self-rated body image and body dissatisfaction among South Asian preschoolers.

Little literature analyses the environment as a possible factor affecting body image at an early age. In this study, no significant differences in body image were found between educational settings, despite rural children usually showing higher physical activity levels [34]. In this regard, Williams and his group [35] analysed the perceptions of weight among children in rural Appalachia in comparison to their urban peers, finding no differences between groups. Gitau and associates [36] evaluated body image in African adolescent girls, reporting a greater preference for linear figures in girls living in urban areas. These results are in line with those obtained by Jackson [37], who discovered that, compared to rural females of the same age, a higher proportion of urban Egyptian girls between the ages of nine and eleven desired to be extremely thin. Packard and Krogstrand [38] found that most of their study's rural participants had at least one weight concern, had engaged in dieting, and desired to be thinner.

The participants in our study showed higher values of body dissatisfaction compared with another study conducted in a similar sample [39], although this could be explained by the non-inclusion of six-year-old schoolchildren, in whom sociocultural factors affect body image more strongly [40]. While some studies report body dissatisfaction in the majority of their participants [29], others report satisfaction levels above 50% [23]. This desire for a different body image is usually for a more linear one [41], increases with age [42], and is more noticeable in girls [43]. Despite this, researchers such as Damiano [23] or Musher-Eizenman [44] have found that a high percentage of students, over 40%, indicated larger figures as ideal for them. In addition, body dissatisfaction has been highly correlated with BMI, as students who are overweight or have obesity show higher levels of body dissatisfaction than those who do not [45]. Regarding the relationship between preschoolers' body dissatisfaction and centre location, there is little analysis in the scientific literature.
Leite [46] stated that school location, whether urban or rural, was unrelated to body dissatisfaction among Brazilian students. However, Ferguson and Cramer [47] found that children from rural backgrounds showed higher self-esteem levels than those residing in urban areas in the Jamaican population. By contrast, Welch [48] found that urban students between the ages of 9 and 11 had a higher ideal body image and were more content with their bodies.

This study has several limitations. Firstly, because of the convenience sampling used, the findings must be interpreted with caution. Additionally, the limited number of sociodemographic factors is insufficient to characterise the students in depth, considering the numerous environmental elements that impact children's development; information about social network use or relationships with parents or teachers is related to body image formation at an early age [49-51]. Another limitation was not objectively assessing participants' body composition, so it would be interesting to consider objective methods for body composition assessment.

Conclusions

The current study focuses on analysing the self-perceived body image of preschool children from the region of Extremadura, allowing body dissatisfaction levels to be evaluated in the early stages of development. Girls generally see themselves with a higher body mass index than boys and show greater body dissatisfaction. Centre location does not seem to be a variable to be considered in generating differences in body image, at least in the current research. Therefore, these results point to the need to design and develop programs for body image in the early stages, considering sex differences.

Institutional Review Board Statement: The use of these data did not require approval from an accredited ethics committee, as they are not covered by data protection principles, i.e., they are non-identifiable, anonymous data collected through an anonymous survey for teachers. In addition, based on Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (which entered into force on 25 May 2016 and has been compulsory since 25 May 2018), data protection principles do not need to be applied to anonymous information (i.e., information that does not relate to an identified or identifiable natural person, or to data rendered anonymous in such a manner that the data subject is not, or is no longer, identifiable). Consequently, the Regulation does not affect the processing of our information. Even for statistical or research purposes, its use does not require the approval of an accredited ethics committee.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The datasets are available from the corresponding author upon reasonable request.

Appendix A (item translated from Spanish): "Which child would you like to look like?"
The effect of ultralow-dose antibiotics exposure on soil nitrate and N2O flux

Exposure to sub-inhibitory concentrations of antibiotics has been shown to alter the metabolic activity of micro-organisms, but the impact on soil denitrification and N2O production has rarely been reported. In this study, incubation and column transport experiments were conducted on soils exposed to as many as four antibiotics in the ng·kg−1 range (several orders of magnitude below typical exposure rates) to evaluate the impact of ultralow-dose exposure on net nitrate losses and soil N2O flux over time. Under anaerobic incubation conditions, three antibiotics produced statistically significant dose-response curves in which denitrification was stimulated at some doses and inhibited at others. Sulfamethoxazole in particular had a stimulatory effect at ultralow doses, an effect also evidenced by a nearly 17% increase in nitrate removal during column transport. Narasin also showed evidence of stimulating denitrification in anaerobic soils within 3 days of exposure, concurrent with a statistically significant increase in N2O flux measured over moist soils exposed to similar doses. The observation that even ultralow levels of residual antibiotics may significantly alter the biogeochemical cycle of nitrogen in soil raises a number of concerns pertaining to agriculture, management of nitrogen pollution, and climate change, and warrants additional investigation.

SMX and SDZ are also among the most frequently detected antibiotics in groundwater, with reported concentrations ranging from 0.08 ng·L−1 (ref. 12) to 1.11 μg·L−1 (ref. 13). Assuming a partition coefficient (Kd) of 2.0 L·kg−1 (ref. 14), the concentration in saturated soils can be estimated at between 0.16 ng·kg−1 and 2.22 μg·kg−1, though this may vary depending upon the antibiotic source (e.g., sewage sludge vs. groundwater) and is subject to rapid dissipative losses 15. The effect of all four selected antibiotics on gross denitrification was measured in terms of nitrate losses from anaerobic pot incubations in which soils were exposed to ng·kg−1 doses. NAR and SMX generated the strongest responses and were selected for additional study. SMX is among the few veterinary antibiotics shown to leach into the saturated zone 16 and was therefore chosen for the saturated column experiments. N2O flux experiments, conducted over moist soils, were performed using NAR, which is less mobile 17 and tends to sorb in the upper, temporally moist soil horizons where N2O is easily lost to the atmosphere.

Results and Discussion

Anaerobic nitrate reduction. KNO3 solutions with various low doses of the selected antibiotics (SMX, SDZ, NAR, and GTC) were added to pre-incubated soils and incubated, and extractable nitrate was determined (see Materials and Methods for details). All four antibiotic treatments yielded some combination of stimulated (% Control > 100%) and inhibited nitrate losses (% Control < 100%) and exhibited a temporal trend toward inversion, e.g., early stimulation followed by inhibition after longer incubation periods (see Table 1). Analysis of variance (ANOVA) identified statistically significant dose-responses for 3 of the 4 antibiotics tested (see Table 2); the majority of these were observed in soils treated with SMX. Figure 1 illustrates the time-dose response (in terms of the percentage of extractable nitrate lost relative to the control) in soils treated with SMX.
Four statistically significant, U-shaped dose-response curves (p < 0.05) are observed, in which nitrate losses initially exceed those of the control at the lowest (1 ng·kg−1, 207%) and highest (1000 ng·kg−1, 123%) doses but are inhibited relative to the control at 10 ng·kg−1 (12%). This overall pattern is maintained for a total of 4 days, after which the magnitude of both stimulation and inhibition declines. On Day 5, only the 1 ng·kg−1 dose corresponds to stimulated nitrate losses.

Treatment with SDZ, NAR, and GTC resulted in far less distinct time-dose-response patterns, but showed an overall tendency for the rate of nitrate removal to increase as a result of exposure (Table 1). Where SDZ was applied, no individual dose-response was determined to be statistically significant (Table 2), but a general pattern of accelerated nitrate losses was observed at one or more sampling points for all four doses (Table 1). These were most commonly observed on Days 1 and 2, and the lowest dose (1 ng·kg−1) yielded a stimulatory effect for 4 of the 5 days tested. In soils treated with NAR, all four doses stimulated nitrate losses on Day 1 and Day 3, and all resulted in a diminished removal rate on Day 5 (Table 1). Three of these doses (1, 10, and 1000 ng·kg−1) were observed to correspond with an increased nitrate removal rate on all but the 5th day of sampling. Both the maximum stimulation (1000 ng·kg−1, 199%) and a significant dose-response occurred on Day 3 (p = 0.02, Table 2). Higher doses of GTC (100 ng·kg−1 and 1000 ng·kg−1) also stimulated nitrate removal for four of the five days tested (Table 1). Though stimulation of the greatest magnitude occurs on Day 2 (144%, 100 ng·kg−1), the statistically significant dose-response (Table 2) reflects inhibition observed at 1 and 1000 ng·kg−1 contrasting with stimulation at the two middle doses (10 and 100 ng·kg−1).

Table 2. Results of one-way ANOVA for soils treated with 1, 10, 100, or 1000 ng·kg−1 sulfamethoxazole, sulfadiazine, narasin, and gentamicin over a five-day sampling period. The F-statistic was calculated for the concentration of nitrate measured in triplicate samples grouped by dose. Dose-response relationships are deemed statistically significant where Fstat > Fcrit. P-values less than 0.05 are shown in bold.

The results of these anaerobic denitrification experiments provide evidence that ecologically significant microbial communities in soil and sediment may have a statistically significant dose-response when exposed to antibiotics at ultralow concentrations (ng·kg−1). The most frequently observed effect was an accelerated loss of soil nitrate, which stands in contrast to expectation, because antibiotics are generally employed to inhibit microbial activity. Based upon the broad temporal trends exhibited by these results (stimulation observed in 63% of samples on Days 1-4 and inhibition in 75% on Day 5) and the distinctive U-shaped dose-response curve corresponding to SMX treatment, it is tempting to draw some comparison between these outcomes and direct stimulation hormesis (Figure S1), in which sub-inhibitory exposure to a toxin can produce a stimulatory effect in the target organism 18. Though it is possible that hormetic responses can and do occur in soils exposed to these antibiotics, any apparent hormetic effect is likely the result of population-level consequences of individual hormesis and not the hormetic response in and of itself.
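The Table 2 dose-response tests reduce to a one-way ANOVA over triplicate nitrate measurements grouped by dose. A minimal sketch, using hypothetical %-of-control values rather than the study's data:

```python
from scipy import stats

# Hypothetical extractable-nitrate values (% of control) for triplicate pots
# on a single sampling day; illustrative numbers only.
dose_groups = {
    "1 ng/kg":    [205, 210, 206],
    "10 ng/kg":   [14, 10, 12],
    "100 ng/kg":  [95, 102, 98],
    "1000 ng/kg": [120, 125, 124],
}

f_stat, p_value = stats.f_oneway(*dose_groups.values())
significant = p_value < 0.05  # dose-response deemed significant, as in Table 2
print(f"F = {f_stat:.1f}, p = {p_value:.4f}, significant: {significant}")
```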
An alternate and perhaps simpler hypothesis is that accelerated nitrate reduction is the functional outcome of selective antibiotic pressure within the more complex soil microbial community. For example, NAR is active against gram-positive bacteria, and since most denitrifying organisms are gram-negative 19, NAR is unlikely to inhibit or stimulate growth or enzymatic activity within this functional group. On the other hand, inhibition of one or more gram-positive organisms in the soil microbial community is expected and may increase the availability of resources to competing organisms, including the gram-negative denitrifiers, allowing them to grow at the expense of inhibited species. Evidence that both broad-spectrum and gram-positive/gram-negative antibiotics can and do affect the structure and function of soil microbial communities at higher doses (mg·kg−1) is abundant 20. Of the antibiotics tested in the present study, for example, SDZ has been reported to decrease microbial diversity 21 and to increase the ratio of ammonia-oxidizing archaea to ammonia-oxidizing bacteria 22. At comparable doses, SDZ and SMX 23,24 have both been observed to increase the ratio of fungi to bacteria in soils. Differences in antibiotic activity, i.e., broad-spectrum vs. gram-negative/gram-positive, can be expected to impact the microbial population differently and may account for variations in the overall dose-time-response curves reported here, but they do not explain why maximum stimulation in the sulfonamides corresponds to the lowest doses (1 ng·kg−1) yet occurs in NAR- and GTC-treated soils at higher doses (1000 ng·kg−1 and 100 ng·kg−1, respectively).

Denitrification in saturated sediment columns. Where the effects of antibiotics on soil function have been evaluated, denitrification has consistently been shown to be inhibited where higher doses of antibiotics (>500 μg·kg−1) were administered to soil 2, sediment 5,25,26 and groundwater 3,27. The consistency of these results contrasts greatly with the combined stimulation and inhibition reported here for ng·kg−1 doses in anaerobic soils, and further with the results of the anaerobic column experiments. Figure 2 illustrates the effluent nitrate concentration (as a % of the influent concentration) for a set of six columns receiving a 1 mM nitrate influent solution. Starting from t = 24 hours, 1 ng·L−1 SMX was continuously added to the influent of three of these columns. Prior to the addition of SMX, approximately 60% of influent nitrate was reduced during transit through each of the six columns. As the experiment continued, nitrate reduction in the three control columns showed slight diurnal variations, possibly resulting from temperature changes in the laboratory, but the overall average remained relatively constant at ~60%. In contrast, the columns receiving influent spiked with 1 ng·L−1 SMX showed an increase in overall nitrate reduction, with total nitrate losses increasing from an initial 60% to nearly 90% at the end of the experiment. According to Student's t-tests, this increase is statistically significant at or above the 95% confidence level from t = 30 through the end of the experiment (see Supplementary Information). Unlike the anaerobic incubation experiment, where the maximum stimulatory effect of SMX was observed on Day 1, stimulation in the column experiments appears to steadily increase over time.
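The per-fraction comparison behind Figure 2 amounts to converting effluent concentrations to percent nitrate reduction and testing treated against control columns. A minimal sketch with hypothetical effluent values; the pooled two-sample t-test is an assumption consistent with the Statistical Analysis section below:

```python
import numpy as np
from scipy import stats

# Hypothetical effluent nitrate (mM) for one 6-h fraction; influent = 1 mM.
influent = 1.0
control_cols = np.array([0.41, 0.39, 0.40])   # triplicate control columns
smx_cols     = np.array([0.15, 0.12, 0.14])   # triplicate 1 ng/L SMX columns

reduction_ctrl = 100 * (1 - control_cols / influent)   # % nitrate reduced
reduction_smx  = 100 * (1 - smx_cols / influent)

# Per-fraction comparison at the 95% confidence level, as applied from t = 30 h onward
t_stat, p_value = stats.ttest_ind(reduction_smx, reduction_ctrl)
print(f"control: {reduction_ctrl.mean():.0f}%  SMX: {reduction_smx.mean():.0f}%  "
      f"p = {p_value:.4f}")
```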
The discrepancy between these results may indicate that the stimulatory effect of SMX at the 1 ng·kg−1 or 1 ng·L−1 level is reduced over time by biodegradation. The soil used for the anaerobic incubation experiment received only a single dose of SMX at the beginning of the experiment, whereas the columns received a steady supply of SMX-spiked influent that was prepared daily. The gradual increase in denitrification rate relative to the control might indicate that any microbial shift resulting from 1 ng·L−1 SMX exposure is both maintained and enhanced by continued antibiotic pressure at this dose.

N2O Flux. Where any changes in denitrification rate or potential in soil and sediment are observed, changes in the flux rate of N2O, a powerful greenhouse gas, are also likely. Though at least one previous study has reported a decrease in N2O from mineral soils treated with 1-1000 μg·L−1 SMX 5, the opposite effect was observed in moist soils treated with 1-1000 ng·kg−1 NAR. As seen in Fig. 3, the average N2O flux is around 0.1 ppm·day−1 for all antibiotic treatments and the control after only one day of incubation, but on Day 3 a statistically significant dose-response emerged (see Table 3, p = 0.0067). The dose-response observed is nearly linear, with N2O flux ranging from 0.1 ppm·day−1 (Control) to approximately 0.4 ppm·day−1 (1000 ng·kg−1). NAR was also shown to stimulate nitrate reduction at each of these doses on Day 3 (Table 1).

Conclusions

Disturbances to the biogeochemical nitrogen cycle have been reported in soils and sediment exposed to a wide range of antibiotic compounds. The effects observed at both ultralow (ng·kg−1) and moderate (μg·kg−1) antibiotic concentrations include shifts in microbial diversity and community structure as well as overall function, which raises a number of concerns pertaining to agriculture, nitrogen management, and climate change. In agriculture, the factors controlling microbial N-cycling are well characterized, and the resulting relationships have been used to develop a number of different modeling tools to improve nitrogen use efficiency and reduce nitrogen loading rates to sensitive ecosystems 29,30. At present, these models do not take into account potential temporal and functional shifts in the biogeochemical nitrogen cycle that may arise when soil microorganisms are exposed to antibiotics. Natural mitigation of aquatic nitrate pollution, which is tied to a number of human health risks 31 and to the degradation of aquatic ecosystems 32,33, may also be affected. Excess nitrate leached from soil is significantly reduced during transport through soil and sediment, with denitrification (NO3− → N2O → N2) estimated to reduce groundwater NO3− by as much as 50% on a watershed scale 34. Denitrification is inhibited by a number of antibiotics when the dose exceeds 500 μg·kg−1, a distinctly negative outcome in terms of water quality and the health of aquatic ecosystems, but may be stimulated for up to 3 or 4 days when soils are exposed to <1 μg·kg−1 SMX, SDZ, NAR, or GTC. A stimulated response at physically and biologically reduced concentrations might partially counter high-dose inhibition by enhancing denitrification over longer, low-dose exposures, but appears to have the potential to increase microbial production of nitrous oxide (N2O), a powerful greenhouse gas and the leading modern contributor to stratospheric ozone depletion 35.
Whether these pathways or anaerobic methane (CH4) production may also be stimulated by exposure to ultralow doses of antibiotics is presently unknown, but is very relevant to climate research. Based upon the growing body of evidence suggesting that both low- and high-dose antibiotics in the terrestrial environment can and do affect ecologically important aspects of the biogeochemical nitrogen cycle, additional research is strongly encouraged, to include: (1) a larger number of antibiotics tested at both low (ng·kg−1) and high (μg·kg−1) exposure levels, (2) a wide variety of different soils and sediments, (3) use of isotopic tracers to better constrain the N2O source where denitrification and nitrification are affected, and (4) chronic and/or repeat exposure tests to determine whether single-dose effects are persistent and/or cumulative, and the role of antibiotic resistance in those changes.

Figure 2 caption (displaced): Experimental columns were spiked with 1 ng·L−1 SMX from t = 24 to t = 108. Triplicate columns were run for the spiked as well as the control tests. Statistically different nitrate reduction (p < 0.05) was observed from t = 30 to t = 108 and is indicated with solid markers.

Materials and Methods

Statistical Analysis. Student t-tests were used to evaluate the statistical significance of individual treatments relative to the control at each sampling point (95% confidence interval), and an analysis of variance (ANOVA) was used to determine whether dose-responses (C/C0) were statistically significant at the 95% confidence level. Comparison of group means with multiple t-tests would lead to significant Type-1 errors (e.g., 14.3% for 3 t-tests), whereas the Type-1 error remains at 5% in a one-way ANOVA analysis of multiple group means 36.

Soil Sampling. The soil used in this study was sampled from a coastal farm (Bull's Eye Farm) along the Upper Indian River Bay, near Milford, Delaware. The history of the site is known for more than 20 years through personal communication with the farmer who leases the land, and the authors are assured that the soils have not previously been exposed to antibiotics. Groundwater sampling conducted at this site in 2012 corroborates this conclusion (unpublished data). Two soils were collected: topsoil samples (sandy loam) were composited from 10 cm cores, air-dried, sieved to 2 mm, and stored at 4 °C; the subsoil (sandy) was collected from the saturated zone at 2 meters depth using an auger. Following collection, the subsoil samples were air-dried and stored at 4 °C.

Anaerobic Incubation Experiment. A set of 48 soil samples (10 grams each, air-dry basis) was treated with 10 mL of 12.5 mg/mL glucose solution and then pre-incubated at 25 °C in 50 mL centrifuge tubes in order to establish anaerobic conditions and deplete residual nitrate from the soil. Extractable nitrate was confirmed to be zero after 9 days. The pre-incubated samples were then dosed with 125 mg glucose, 100 mg KNO3 and 1, 10, 100, or 1000 ng·kg−1 narasin, gentamicin, sulfadiazine, or sulfamethoxazole under N2 gas as a 1 mL solution. Each treatment was performed in triplicate, with control samples receiving no antibiotic. Following amendment, the topsoil samples were incubated in the dark at 25 °C for an additional 1-5 days and then extracted with 10 mL of 1 M KCl. The extractable nitrate was quantified using a SEAL AQ2 Discrete Nutrient Analyzer (Seal Analytical, Mequon, Wisconsin, USA).
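The Type-1 error inflation quoted under Statistical Analysis above is the standard familywise calculation for independent tests; a quick check:

```python
alpha = 0.05
k = 3  # number of pairwise t-tests
familywise_error = 1 - (1 - alpha) ** k
print(f"{familywise_error:.1%}")  # 14.3%, matching the figure quoted above
```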
N₂O Flux Experiment. 75 g of air-dried soil was measured into 144 polypropylene containers (4 cm × 4 cm × 6 cm) and moistened with 10 mL Milli-Q water. The containers were capped and pre-incubated at room temperature for 4 days. Following the pre-incubation period, the soils were treated with an antibiotic solution (0, 1, 5, 10, 50, 100, 500, or 1000 ng·kg⁻¹ narasin final concentration) and a nutrient solution (34 mg·mL⁻¹ (NH₄)₂SO₄ and 21 mg·mL⁻¹ KNO₃). Additional Milli-Q water was added to raise the total moisture content to 40% water-filled pore space (WFPS), and the containers were placed inside 500 mL Kilner jars outfitted with two gas-tight sampling ports. Headspace samples were collected from 6 replicates for each treatment at 24, 48, and 72 hours after the addition of the antibiotic and nutrient solutions. Samples were transferred to evacuated Exetainer vials and analyzed by isotope ratio mass spectrometry at the University of California, Davis.

Column Experiment. A set of six 15 × 2.5 cm (length × diameter) glass columns were packed with air-dried sandy subsoil. The columns were purged with CO₂ for 20 minutes and then saturated from bottom to top with degassed Milli-Q water. All six columns underwent a two-week pre-treatment during which a nutrient solution containing 0.5 mM NO₃⁻ and 0.4 mM glucose (Control) was continually passed through the columns at an average linear velocity of 1 m/day. After 2 weeks, effluent samples were collected in 6 hour increments. Twenty-four hours after the first fractions were collected, the influent to three columns (Experimental) was modified by the continuous addition of 1 ng·L⁻¹ sulfamethoxazole. The influent vessel, all tubing, and the columns were wrapped in aluminum foil to prevent photodegradation of the antibiotic during transit, and additional fractions were collected for 3.5 days following the initial addition of antibiotic to the experimental columns. The nitrate concentration of effluent samples was determined using ion chromatography with an AS14A 5-µm column (Dionex, Waltham, Massachusetts, USA).
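A minimal sketch of how the per-time-point comparison between spiked and control columns could be computed (Python; the effluent nitrate values below are invented placeholders, not the study's fraction data), flagging fractions where the groups differ at the 95% level:

```python
# Sketch: flag effluent fractions where spiked columns differ from controls
# (two-sample t-test per time point, 95% confidence, triplicate columns).
from scipy import stats

# Hypothetical effluent nitrate (mM) for triplicate columns at two time points
fractions = {
    30: {"control": [0.42, 0.40, 0.43], "spiked": [0.35, 0.33, 0.36]},
    108: {"control": [0.41, 0.43, 0.40], "spiked": [0.30, 0.29, 0.31]},
}

for t_hours, groups in fractions.items():
    t_stat, p = stats.ttest_ind(groups["control"], groups["spiked"])
    marker = "solid" if p < 0.05 else "open"  # plotting convention in the figure caption
    print(f"t = {t_hours} h: p = {p:.4f} -> {marker} marker")
```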
Tigecycline Immunodetection Using Developed Group-Specific and Selective Antibodies for Drug Monitoring Purposes

Tigecycline (TGC), a third-generation tetracycline, is characterized by more potent and broader antibacterial activity and by the ability to overcome different mechanisms of tetracycline resistance. TGC has proven to be of value in the treatment of multidrug-resistant infections, but therapy can be complicated by multiple dangerous side effects, including direct drug toxicity. Given that, a TGC immunodetection method has been developed for therapeutic drug monitoring to improve the safety and efficacy of therapy. The developed indirect competitive ELISA utilized TGC-selective antibodies and group-specific antibodies interacting with selected coating TGC conjugates. Both assay systems showed high sensitivity (IC50) of 0.23 and 1.59 ng/mL, and LOD of 0.02 and 0.05 ng/mL, respectively. Satisfactory TGC recovery from the spiked blood serum of healthy volunteers was obtained in both assays and lay in the range of 81–102%. TGC concentrations measured in sera from COVID-19 patients with secondary bacterial infections were mutually confirmed by ELISA based on the other antibody–antigen interaction and showed good agreement (R² = 0.966). A TGC pharmacokinetic (PK) study conducted in three critically ill patients proved the suitability of the test for analyzing therapeutic concentrations of TGC. The significant inter-individual PK variability revealed in this limited group supports therapeutic monitoring of TGC in individual patients and application of the test for population pharmacokinetic modelling.

Introduction

Tigecycline (TGC) is the first-in-class glycylcycline antibiotic approved for treatment of complicated skin and soft tissue infections, intra-abdominal infections, and community-acquired pneumonia. This third-generation tetracycline is characterized by more potent activity than tetracyclines of previous generations. TGC demonstrates a wide antimicrobial spectrum which includes both gram-positive and gram-negative bacteria, and anaerobes [1]. Importantly, it is effective against tetracycline-resistant organisms with efflux and ribosomal protection mechanisms of resistance and often retains activity against carbapenem-resistant strains of A. baumannii and Enterobacteriaceae, as well as methicillin-resistant S. aureus, which makes it an attractive agent for management of infections caused by multidrug-resistant (MDR) microorganisms [2]. Despite excellent in vitro activity, post-marketing trials of TGC have shown an increased risk of death compared to other drugs. Indeed, based on a large meta-analysis of over seven thousand patients, TGC therapy is accompanied by an increased risk of death and treatment failure [3]. This finding is commonly attributed to failure to achieve target concentrations, and high-dose TGC therapy has been associated with increased survival [2]. However, TGC therapy can be complicated by such hazardous events as drug-induced liver failure and coagulopathy, the latter being dose-dependent [4].

The coating buffer was 0.05 M carbonate-bicarbonate buffer (CBB, pH 9.6). The washing buffer and standard/sample dilution buffer were 0.15 M phosphate-buffered saline containing 0.05% Tween 20 (PBST, pH 7.2). The buffer for antibody and GAR-HRP was 1% BSA-PBST. The TMB-based substrate solution was provided by Bioservice (Moscow, Russia). The stop solution for terminating the enzymatic reaction was 1 M H₂SO₄.
High-binding polystyrene 96-well plates were from Costar (Corning, Durham, NC, USA). The same procedure was conducted for the preparation of coating conjugates based on GEL as a carrier. During formaldehyde condensation, three temperature regimes were maintained, and the resultant conjugates were designated GEL-TGC(f)-25C, GEL-TGC(f)-37C, and GEL-TGC(f)-50C. The heterologous coating conjugate GEL(pi)-TGC was synthesized using periodate oxidation of GEL according to the procedure described in [16] for natamycin. Briefly, GEL (8 mg, 50 nmol) was oxidized by sodium periodate (100 eqs) in 1 mL of water for 20 min at RT. The excess periodate was then removed by overnight dialysis against 5 L of water. One hundred molar equivalents of TGC in CBB (pH 9.6) were added to the oxidized GEL and stirred for 2 h at RT. Sodium borohydride (50 µL, 4 mg/mL) was added to reduce the Schiff base product, stirred for an additional 2 h at RT, and then subjected to dialysis.

Antibody Preparation

The immunization procedure was carried out in accordance with the guidelines for the care and use of laboratory animals in biomedical research and was approved by the Ethical Committee for the Care and Use of Animals of the I. Mechnikov Research Institute for Vaccines and Sera. Rabbits (2.0–2.5 kg) were obtained from the Scientific and Production Centre for Biomedical Technologies (Elektrogorsk, Russia). The animals were kept for one or more weeks to adapt to vivarium conditions and then were immunized with BSA-TGC(f). Initial immunization was performed with 100 µg of the immunogen emulsified in CFA, which was injected subcutaneously at multiple points on the back. Subsequent monthly boosters were performed with a stepwise decreasing dose of the immunogen [17]. One week after each booster, blood samples were taken from the marginal ear vein of the rabbits. The resulting sera were used to monitor the immune response. An equal volume of glycerol was added to the sera, which were then stored at −20 °C.

Competitive Indirect ELISA

TGC quantification was performed according to the generally accepted procedure of competitive indirect ELISA and included the following steps:

1. Coating of antigens. GEL-based conjugates at the optimum concentration in CBB were coated on the plates (100 µL per well) overnight at 4 °C.
2. Antibody binding (competitive step). Working antibody solution prepared in 1% BSA-PBST (100 µL) was added to the wells together with 100 µL of TGC standard (1000–0.01 ng/mL, and 0 ng/mL) in PBST or samples and incubated for 1 h at 25 °C.
3. Detection of bound antibody using GAR-HRP. The latter reagent in 1% BSA-PBST was added in the amount of 100 µL per well and incubated for 1 h at 37 °C.
4. Enzymatic step. The substrate mixture (100 µL) was added to the wells, and after 0.5 h the reaction was terminated with 100 µL of the stop solution.

All the steps described above were completed by washing three times with PBST to remove unreacted reagents. The intensity of the colored reaction product was read at 450 nm using a LisaScan spectrophotometer (Erba Manheim, Czech Republic).

The optimal concentrations of antibody and coating antigens were determined in checkerboard titration experiments. Conjugates coated on the plates from solutions with different concentrations were bound by antibodies diluted to varying degrees. Pairs of immunoreagents whose binding absorbance at 450 nm was 0.8–1.2 were then compared in a competitive assay to choose the most sensitive variant.
The inhibitory activity of each concentration of the TGC standard was expressed as B/B0, the percentage of the output signal at a given concentration of the standard relative to the maximum signal at zero concentration of the standard. Four-parameter-fitted standard curves were constructed as a function of B/B0 vs. analyte concentration to determine the 50% inhibition concentration as sensitivity (IC50), limit of detection (IC10), and dynamic range (IC20–IC80) [18] for the competitive assay of TGC. Cross-reactivity of the antibody toward other tetracyclines was calculated as the ratio IC50 TGC/IC50 analogue.

Sample Collection, Pretreatment and Recovery Examination

Blood serum samples were collected from healthy volunteers and from 4 critically ill patients treated with TGC in MEDSI Clinical Hospital #1. The informed consent form was signed by the patients or their legal representatives. The study was approved by the MEDSI Clinic Independent Ethical Committee, Moscow, Russia (Protocol #29, 15 April 2021). The assay was used to quantify TGC serum concentration in adult patients (#1–#4) with COVID-19 and secondary bacterial infections caused by MDR strains of K. pneumoniae, E. faecium, A. baumannii, and K. pneumoniae, respectively. TGC was used at a dose of 50–100 mg every 12 h in combination with meropenem or polymyxin B at the discretion of the treating physician. Patients received a loading dose twice the maintenance dose on the first day of therapy. For PK analysis, 6–8 blood samples were collected over 12 h in each patient (n = 3) after at least 48 h of TGC therapy: before the infusion, and 5 min, 30 min, 1 h, 2 h, 4 h, 8 h, and 11 h after the infusion. Each sample was collected in EDTA K4 tubes and centrifuged at 4000 rpm for 10 min, after which the supernatant was withdrawn, frozen at −20 °C, and analyzed within a month. Before the analysis, serum samples were thawed and treated with TCA for deproteinization according to the previously described procedure [19]. The precipitated protein was separated by centrifugation for 5 min at 6800× g, and the resulting supernatant, appropriately diluted with PBST, was tested by ELISA. Blank sera from healthy volunteers were spiked with TGC standard at 0.3 and 1.5 mg/L, incubated for 1 h at 37 °C, treated as above, and analyzed by ELISA to determine the rate of recovery as the percentage ratio of TGC found to TGC spiked. GraphPad Prism 8.0 software served to generate standard curves and calculate the concentration of TGC in the samples. PKanalix version 2020R1 (Lixoft SAS, Antony, France) was used for non-compartmental PK analysis.
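To make the curve-fitting step concrete, here is a minimal Python sketch of the four-parameter logistic fit that GraphPad Prism performs for such competitive assays; the calibration points are invented for illustration and are not the assay's data:

```python
# Sketch: four-parameter logistic (4PL) fit of a competitive ELISA
# standard curve, 'B/B0 vs. analyte concentration' (hypothetical points).
import numpy as np
from scipy.optimize import curve_fit, brentq

def four_pl(x, top, bottom, ic50, hill):
    """4PL: signal falls from `top` to `bottom` as concentration rises."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 100.0, 1000.0])  # ng/mL
b_b0 = np.array([98.0, 90.0, 62.0, 50.0, 25.0, 15.0, 5.0, 2.0])   # percent

params, _ = curve_fit(four_pl, conc, b_b0, p0=[100.0, 0.0, 1.0, 1.0])
top, bottom, ic50, hill = params
print(f"IC50 (sensitivity): {ic50:.2f} ng/mL")

# IC10 = concentration giving 10% inhibition (B/B0 = 90%), used as the LOD
ic10 = brentq(lambda x: four_pl(x, *params) - 90.0, 1e-6, 1e4)
print(f"IC10 (detection limit): {ic10:.3f} ng/mL")
```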
Preparation and Characteristics of Conjugated Antigens

In a recent work by Xu et al. [13], to obtain antibodies against TGC, its closest analogue, 9-amino-minocycline, was used as the immunizing hapten, and its conjugation was carried out by the diazonium and glutaraldehyde methods. Here we attempted to retain the distinctive structural feature of TGC, the 2-(tert-butylamino)acetamide moiety, and used the entire TGC molecule as a hapten. Within this approach, one of the effective conjugation methods for tetracyclines, the Mannich reaction [15], was applied to prepare TGC-based conjugates. The UV spectra demonstrated the changes that occurred in the spectrum of BSA-TGC(f) compared with the BSA spectrum. Thus, as a result of conjugation, the spectral characteristics of TGC (two peaks at 243 nm and 278 nm) and BSA (a peak at 278 nm) fused into a smoothed shoulder on the spectral curve at about 260 nm. The third most pronounced TGC peak, at 347 nm, was shifted to 383 nm in the conjugate (Figure 1A). The spectra of BSA-TGC(f)-7.4 and BSA-TGC(f)-9.6 did not differ from each other but were somewhat more intense compared to BSA-TGC(f)-5.5. Therefore, conjugation of tetracyclines using Mannich condensation under neutral and alkaline pH conditions was preferable, in contrast to the recommendations in [20].

The inhibitory reactivity of conjugates in competitive ELISA may reflect the number of reactive epitopes on the tested conjugates [21] and hence their specific immunogenic activity (Figure 1B). Thus, BSA-TGC(f) conjugates prepared at different pH were compared for their inhibitory activity. BSA-TGC(f)-7.4 turned out to be, although slightly, more active than the other conjugates, which is in full agreement with the spectral data. Therefore, BSA-TGC(f)-7.4 was chosen as the immunogen. The study of the GEL-TGC(f)-25C, GEL-TGC(f)-37C, and GEL-TGC(f)-50C spectra showed slightly more efficient conjugation in the latter case (data not shown). The formation of the heterologous conjugate GEL(pi)-TGC was confirmed immunochemically by antibody binding when it was used as a coating antigen.

[Figure 1: (A) UV spectra; BSA and conjugate concentrations were 0.1 mg/mL, and TGC concentration was 0.01 mg/mL in water. (B) Inhibitory activity of prepared conjugates in ELISA using #32 antibody and TF(pi)-CTC as a coating antigen [15].]

Antibody Preparation, ELISA Development and Characteristics

In addition to the previously developed antibodies with group recognition of tetracyclines, #32 [15], the present research aimed to obtain a new antibody to TGC for comparison of their analytical properties. Immunization was carried out with a stepwise decreasing dose of BSA-TGC(f) according to the scheme (3 × 100 – 3 × 75 – 3 × 50 µg) shown in Figure 2. The rise of the working antibody titer ceased with the fifth injection of the immunogen, reaching 20,000. However, a more important criterion, antibody maturation, gradually improved and reached its best value after the seventh booster immunization. The sensitivity (IC50) of the assay based on the #100/7 antibody was 0.23 ng/mL. These antibodies were chosen for further experiments.

Both the anti-BSA-TC(f) (Ab#32) and anti-BSA-TGC(f) (Ab#100) antibodies were tested for their reactivity with two types of coating antigens, GEL-TGC(f) and GEL(pi)-TGC, to generate optimized ELISA variants. The best characteristics were found for heterologous pairs of reagents [22]. The first was based on Ab#32 and the heterologous hapten coating conjugate GEL-TGC(f), whereas the second pair of reagents included Ab#100 and the heterologous conjugation coating antigen GEL(pi)-TGC. Specificity studies confirmed that the first ELISA was group-specific (Figure S1) and capable of detecting, along with first- and second-generation tetracyclines, representatives of the third generation: TGC and EVC (Table 1). The assay based on the new antibodies was TGC-selective, without any cross-reactions with analogues. This serves as an argument in favor of the unique 2-(tert-butylamino)acetamide fragment of TGC being the target epitope for Ab#100 (Figure S2).

Thus, two ELISA systems were created for TGC determination. Due to their different specificities, it can be postulated that they recognize different fragments (common and individual) of the TGC molecule; both were further employed for therapeutic drug monitoring purposes. The standard curves for these assays are shown in Figure 3, and their analytical characteristics are compared in Table 2. While the Ab#32-ELISA provided a level of sensitivity comparable to the previous report [13], the Ab#100-based assay had an order-of-magnitude better sensitivity and strict TGC selectivity. To the best of the authors' knowledge, no other immunoassay systems for TGC are available in the literature.
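As a worked illustration of the cross-reactivity measure defined in the Methods (CR = IC50 TGC / IC50 analogue), the sketch below uses the reported TGC IC50 together with invented analogue IC50 values; the analogue numbers are assumptions, not Table 1 values:

```python
# Sketch: cross-reactivity (%) = IC50(TGC) / IC50(analogue) * 100,
# computed for a TGC-selective antibody (analogue IC50s are hypothetical).
ic50_tgc = 0.23  # ng/mL, Ab#100-based assay (reported value)
ic50_analogues = {"tetracycline": 500.0, "minocycline": 300.0, "eravacycline": 150.0}

for name, ic50 in ic50_analogues.items():
    cr = 100.0 * ic50_tgc / ic50
    print(f"{name}: CR = {cr:.2f}%")  # low CR indicates strict selectivity
```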
Sample Pretreatment and Recovery Experiments

The sensitivity of the developed tests was high enough to analyze microvolumes of biosamples after a high degree of dilution, which eliminated possible interference from their matrix. However, deproteinization as a pretreatment procedure ensures unification of the samples, excluding possible interferences and interactions with plasma proteins [17,23], whose concentrations can vary significantly, e.g., in critically ill patients. Chromatographic methods for TGC detection, such as HPLC and LC-MS/MS, usually involve protein precipitation of serum samples with acetonitrile [24,25]. In comparison to the commonly used acetonitrile, another deproteinizing agent, TCA, has demonstrated a distinct advantage in tetracycline bioanalysis: it stabilizes the analyte against epimerization and other degradation pathways [26]. For this reason, we compared several sample preparation approaches involving simple dilution with assay buffer and deproteinization with TCA, MeOH, and ACN (Table 3). The TGC recovery data obtained from spiked healthy volunteers' sera were satisfactory for the dilution approach (81–102%) and TCA-mediated deproteinization (74–89%) in both assay systems, whereas MeOH and ACN pretreatment of sera samples provided mostly poor recovery (24–69%).

Comparison of PBST dilution with TCA deproteinization as pretreatment procedures performed with real serum samples did not reveal significant differences between the data. Similar results were obtained in the group-specific and TGC-selective analytical systems (Figure S3). Both of the studied sample pretreatment approaches were found to be acceptable, but regardless of this, further experiments were performed with deproteinized samples.

The sera samples (n = 31) from four critically ill patients were taken to quantify TGC and establish a correlation between the two assay systems. As can be seen from Figure 4, the TGC-specific (Ab#100) ELISA data were in good agreement with those obtained in the group-specific (Ab#32) assay system. It can be assumed that the slightly higher values in the Ab#32-ELISA compared to the Ab#100-ELISA (95%) could be caused by the recognition of TGC metabolites by the group-specific antibodies. The strong correlation between these two groups of data, with a coefficient of 0.95 and R² = 97%, can serve as mutual confirmation of the reliability of the results obtained. This data consistency also confirms the suitability of the group-specific assay system (Ab#32-ELISA) to correctly measure both TGC and other tetracycline analytes using a corresponding calibration. However, further experiments on TGC pharmacokinetics were performed using the specific tool (Ab#100-ELISA).

Tigecycline Pharmacokinetics in Critically Ill Patients Using the Developed ELISA

Steady-state TGC pharmacokinetics was described for three out of four critically ill patients with COVID-19 and secondary bacterial infections. Two patients received continuous venovenous hemodialysis (Patients 1 and 2); one patient was on extracorporeal membrane oxygenation (Patient 3). TGC was administered as a 1 h infusion at a dose of 50–100 mg selected by the treating physician. TGC concentration curves over time are presented in Figure 5; the observed PK parameters are presented in Table 4. At this point, the observed distinctions between PK parameters cannot be ascribed to any specific clinical or demographic parameter, but rather represent inter-individual PK variability in critically ill patients, as also shown in other studies [8,12]. The figure nicely demonstrates how similar serum concentrations are achieved with different doses due to individual changes in clearance and volume of distribution. Overall, the developed assay has demonstrated its suitability for the measurement of TGC therapeutic concentrations in the serum of critically ill patients, and thus may be used for population PK studies and therapeutic drug monitoring in individual patients.
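A hedged sketch of the non-compartmental calculations that PKanalix automates for such profiles (Python/NumPy); the concentration–time values below are hypothetical, and the formulas assume steady-state dosing over a 12 h interval:

```python
# Sketch: non-compartmental PK at steady state from a 12 h sampling interval.
# AUC by linear trapezoid; CL = Dose / AUC_tau; all values are hypothetical.
import numpy as np

dose_mg = 100.0
t_h = np.array([0.0, 1.08, 1.5, 2.0, 3.0, 5.0, 9.0, 12.0])          # h after infusion start
conc = np.array([0.25, 0.90, 0.70, 0.55, 0.45, 0.35, 0.28, 0.25])   # mg/L

auc_tau = np.trapz(conc, t_h)      # mg*h/L over the dosing interval
cl = dose_mg / auc_tau             # L/h, apparent steady-state clearance
cmax, tmax = conc.max(), t_h[conc.argmax()]
print(f"AUC_tau = {auc_tau:.2f} mg*h/L, CL = {cl:.1f} L/h")
print(f"Cmax = {cmax:.2f} mg/L at t = {tmax:.2f} h")
```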
Conclusions

Two immunodetection systems for TGC were developed based on indirect competitive ELISA using class-specific and TGC-selective antibodies. Using a specially selected coating antigen, a previously developed group-recognition antibody was adapted to detect, in addition to the first- and second-generation tetracyclines, the third-generation representatives TGC and EVC. The de novo-generated antibody against the BSA-TGC conjugate demonstrated strict TGC selectivity. Thus, two immunoassays capable of recognizing different TGC moieties were compared as tools for therapeutic monitoring purposes. Both assays demonstrated high sensitivity, with half-inhibition TGC concentrations of 0.23 and 1.59 ng/mL, and satisfactory recovery from spiked volunteers' sera of 74–102%. Sample pretreatment, whether simple dilution with assay buffer or TCA-mediated deproteinization, was adequate and took no more than 5 min. The developed ELISAs were applied to describe TGC pharmacokinetics for three critically ill COVID-19 patients with secondary bacterial infections. Despite the limited study group, the resulting PK profiles varied significantly, indicating the relevance of individual therapeutic TGC monitoring. The concentrations measured by the two analytical systems, which differ in specificity, were strongly correlated, with a coefficient of 0.95 and R² = 97%, confirming the validity of the data and the reliability of the tests.

Informed Consent Statement: Signed by the patients or their legal representatives.

Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author on request.
Speech pauses in speakers with and without aphasia: A usage-based approach

Pauses in speech are indicators of cognitive effort during language production and have been examined to inform theories of lexical, grammatical and discourse processing in healthy speakers and individuals with aphasia (IWA). Studies of pauses have commonly focused on their location and duration in relation to grammatical properties such as word class or phrase complexity. However, recent studies of speech output in aphasia have revealed that utterances of IWA are characterised by stronger collocations, i.e., combinations of words that are often used together. We investigated the effects of collocation strength and lexical frequency on pause duration in comic strip narrations of IWA and non-brain-damaged (NBD) individuals, with part of speech (PoS; content and function words) as a covariate. Both groups showed a decrease in pause duration within more strongly collocated bigrams and before more frequent content words, with stronger effects in IWA. These results are consistent with frameworks which propose that strong collocations are more likely to be processed as holistic, perhaps even word-like, units. Usage-based approaches prove valuable in explaining patterns of preservation and impairment in aphasic language production.

Introduction

Pauses in speech are indicators of cognitive effort. The frequency, location, and duration of filled (e.g., um) or silent pauses reveal neurocognitive processes underpinning language production in both non-brain-damaged (NBD) speakers and individuals with neurocognitive pathologies, such as aphasia (Butterworth, 1979; Hird & Kirsner, 2010; Klatt, 1980; Watanabe et al., 2008). For example, pause duration can be longer before the initiation of sentences with high syntactic complexity (Ferreira, 1991; Grosjean et al., 1979) or longer noun phrases (Strangert, 1997, pp. 239–242). The grammatical role of a word or part of speech (PoS) further influences pauses in spontaneous speech: in NBD English speakers, pauses tend to occur more before content rather than function words (Maclay & Osgood, 1959), and are longer before verbs relative to nouns (Seifart et al., 2018). Also, pauses in speech are longer before or within utterances with non-canonical syntactic structures (e.g., passive sentences) (Krivokapić, 2007; Krivokapić et al., 2022; Ruder & Jensen, 1972). In individuals with aphasia (henceforth, IWA), pauses are longer and more frequent than in NBD individuals (Angelopoulou et al., 2018; Sahraoui et al., 2015). Pauses in speech may also capture variance in populations with acquired language disorders, such as individuals with post-stroke aphasia or primary progressive aphasia (DeDe & Salis, 2020; Hird & Kirsner, 2010; Mack et al., 2015; Potagas et al., 2022).
Most studies on the effect of linguistic factors on pauses have been conducted under the framework of Generative Grammar and related theories, which determine processing difficulty by the properties of individual words and the complexity of phrase structures. By contrast, usage-based theories suggest that language organization is fundamentally shaped by experience, in particular semantic and pragmatic function, as well as usage-frequency (how often language forms are encountered in everyday communication; Bybee, 2010; Bybee & Beckner, 2015; Langacker, 1987a). Lexical frequency predicts cognitive processing demands (Hasher & Zacks, 1984). NBD adults process high-frequency words more accurately and faster than low-frequency items (Balota et al., 2004; Forster & Chambers, 1973; Stemberger & MacWhinney, 1986). Higher lexical frequency has been associated with shorter gaze duration in reading (Ong & Kliegl, 2008). Similar lexical frequency effects are evident in pauses in the speech of both NBD speakers and IWA, with longer pauses before lower-frequency forms (Beattie & Butterworth, 1979; Geffen et al., 1979; Mack et al., 2015; Maclay & Osgood, 1959; Pashek & Tompkins, 2002).

While lexical frequency effects are well established, statistical properties of language also manifest in the collocation strength of word combinations. Collocation strength refers to the frequency with which words occur together, weighted by the frequency of each word (Gries, 2010; Schneider, 2018). Collocation strength is not merely a function of frequency. For example, in British English, the phrase "it's lovely" has a higher collocation strength than "it's great", despite "great" being more frequent than "lovely" (BNC, 2007). Collocation strength indicates the degree to which words are associated with one another in everyday language use. Zimmerer et al. (2018) found that in semi-structured interviews, IWA (both fluent and non-fluent) produced not only more frequent words, but also more strongly collocated word combinations than non-aphasic speakers with right-hemisphere damage and NBD speakers. The authors employed software developed for this research, the Frequency in Language Analysis Tool (FLAT; Zimmerer et al., 2016), to extract statistical language features. FLAT determines the usage frequency of every word, bigram (two-word combination) and trigram (three-word combination) from orthographic transcripts based on the spoken sub-corpus of the British National Corpus (BNC, 2007) and calculates collocation strength alongside other measures. A follow-up study replicated the results in a new sample of speakers with non-fluent aphasia (Bruns et al., 2019). Investigating use of the expression "I don't know", it also demonstrated that IWA use strong collocations in pragmatically appropriate ways. Increased collocation strength in spontaneous speech was also found in three types of primary progressive aphasia, behavioural variant frontotemporal dementia (Zimmerer et al., 2020) and speakers with probable Alzheimer's disease (Zimmerer et al., 2016).
High collocation strength is one indicator of "formulaic language" (Schmitt et al., 2019): phrases and utterances that are processed not only as combinations of individual words, but as one holistic unit (Conklin & Schmitt, 2012; Schmitt, 2012). According to some usage-based frameworks, language formulas are easier to process because they involve words that are strongly associated with each other, and therefore more easily co-activated (Langacker, 1987a, 1987b, 2008). It is possible that some formulas are represented as single lexicalized units, and in this case they would be easier to process because they require selection of fewer lexical representations and impose reduced combinatorial demands. Collocation strength analyses of continuous speech samples of IWA (see above) suggest that such combinations would be more resilient to disruption under conditions of lexical or grammatical impairment.

Despite the established effects of collocation strength on language production, its effect on pauses has not yet been examined. This study, therefore, has two aims: (1) To see whether previous results showing increased lexical frequency and collocation strength in aphasia (Bruns et al., 2019; Zimmerer et al., 2018, 2020) replicate in a new sample. We hypothesised that (H1) IWA will produce more frequent words and (H2) stronger collocations. (2) To go beyond measuring the properties of produced linguistic forms and investigate how the variables of frequency and collocation strength relate to pauses in speech and, therefore, to effort in online language processing. Three hypotheses emerged from this aim: (H3) Pause duration will be longer in IWA than in NBD individuals; (H4) Pauses will be shorter and fewer before words with higher lexical frequency, and this effect will be greater in IWA; (H5) Pauses will be shorter and fewer within combinations with greater collocation strength, and this effect will be greater in IWA. The novelty of our study lies especially in the last two hypotheses, in which we predicted that increased cognitive demands in the production of less frequent lexical items and weaker collocations would be reflected in the duration of pauses.

We investigated the effects of lexical frequency and collocation strength (as determined by FLAT) on pauses in spontaneous connected speech in IWA and NBD individuals. The participants narrated a (mostly) wordless cartoon. We measured the duration of silent pauses, filled pauses, or combinations of both before each word, and correlated these with the word frequency of the following word and the collocation strength of bigrams. We further entered part of speech as a covariate.

Methods

We report how we determined our sample size, all data exclusions, all inclusion/exclusion criteria, and whether inclusion/exclusion criteria were established prior to data analysis.

Pre-registration and post-hoc tests

We pre-registered our analysis on the Open Science Framework website (https://osf.io/8nwe2/) after data collection and transcription, but before pause annotation and data analysis. We report results according to the pre-registered procedure, but because of properties of the distributions which we did not consider at the time of pre-registration (zero inflation in particular), we added a post-hoc test better suited for these (see section 3.1.6.). We also explored the correlation between proxies for aphasia severity and their influence on pause duration (see section 3.1.7.).
Participants

Data were collected as part of a previous study on grammatical processing in aphasia (Mahmood et al., 2016). The NBD group was recruited from a university register of research volunteers. Ethics approval was granted by the relevant institutional Ethics Committee and all the participants provided informed consent to take part in the study (LC/2013/05). Inclusion criteria were English as the native language (all our participants were British English speakers), presence of aphasia in the IWA group and absence of aphasia in the NBD group. Exclusion criteria were a diagnosis of developmental or other cognitive disorders. The sample consisted of 20 NBD individuals (3 male, 17 female) and 20 IWA (16 male, 4 female). The sample size is consistent with previous work on pauses in aphasia (e.g., Angelopoulou et al., 2018). Table 1 shows sociodemographic information and language assessment results.

Beyond production in discourse (see 2.3 below), we measured production of single words and comprehension of single words and sentences to further profile our speakers. We selected the Boston Naming Test (BNT; Goodglass et al., 2001) picture naming task to test lexical production, and the Comprehensive Aphasia Test (CAT; Swinburn et al., 2008) word-picture matching and sentence-picture matching subtests for testing comprehension. Because we did not use the entire CAT, this study does not have a single standardised measure of aphasia severity. Thus, to examine the relationship between discourse and other measures, our correlation tests used BNT scores and a composite (mean) of CAT subtests.

Test procedure

Participants met with a researcher in a quiet room. Speech data were elicited using "The dinner party" cartoon (Fletcher & Birt, 1983). In this task, participants described an 8-picture story that contained no dialogue or narration. The instructions were: "Look at these pictures. Together, they make a story. Could you tell me in your own words everything you see going on in the pictures". Speech was digitally recorded and orthographically transcribed using F4transkript (Jones & German, 2016).

Annotation procedure

The audio of each sample was loaded into ELAN Linguistic Annotator V 6.0 (Max Planck Institute for Psycholinguistics, 2020). We counted the words of each sample using the R package psych V.1.9 (Revelle, 2022). After segmenting the words in the audio sample, the transcriptions were aligned to their respective segments in an annotation tier. Pauses were defined as any speech hesitation, which could be silent, filled with interjections, or containing both. The duration and location of pauses were identified based on visual and acoustic inspection of the spectrograph, first at 100% and subsequently at 30% playback rate. We measured word onsets and offsets and defined pauses as the time (measured in milliseconds; ms) between one word's offset and the following word's onset. We combined filled and silent pauses in our analysis, as both can reflect processing demands in speech.

We created an additional tier categorising the pauses:

1. "No pause", where the distance between words was between 0 and 250 ms; this rationale takes into account the observations of Goldman-Eisler (1968) that hesitations of this duration are related to articulatory processing.
2. "Silent pause", annotated as a hesitation greater than 250 ms with no acoustic signal between words.
3. "Filled pause", annotated as a hesitation longer than 250 ms between words marked by filler words (interjections such as "uhm" or "uh").
"Filled/Silent pause", annotated as a filled pause followed by a silent pause or vice versa.As silent and filled pauses both reflect processing demands in language production, we considered both in the statistical analysis.Fig. 1 shows the ELAN annotator interface.Lexical frequency and collocation strength were determined using the FLAT software (Zimmerer et al., 2016(Zimmerer et al., , 2018)).Lexical frequency was measured as the frequency of occurrence of a word in the spoken sub-corpus of the BNC ( 2007), in occurrences per million words.Collocation strength of each bigram was measured using t-scores (Gablasova et al., 2017;Gries, 2010), which are based on the raw frequency of each bigram and the frequency of its individual words.The formula is: tscore ab ¼ frequency ab À expected frequency ab ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi frequency ab p where frequency ab is the observed frequency of the bigram in the spoken BNC and expected frequency ab is the probability of the bigram if the order of the words was random.Following the procedure outlined in previous studies using FLAT (Bruns et al., 2019;Zimmerer et al., 2018Zimmerer et al., , 2020)), bigrams were excluded from the collocation strength analysis if they contained proper nouns or pseudowords.Lexical repetitions were removed so that the analysis only considered one instance of the repeated word, unless raters considered repetitions intentional.Ungrammatical combinations (e.g., "man think") were also excluded, since these would be classified as rare not because of greater language capacity, but because of failed combinatorial operations.In bigrams with a filled pause (e.g., uhm or uh), the interjection was removed, and the collocation strength between the words preceding and following the pause was measured (note that these interjections were considered for pause duration analysis).FLAT also automatically excludes combinations which cross sentence boundaries (marked with sentence-final punctuation) or utterance boundaries (annotated via line break).The total number of bigrams removed was 159 (4.3% of all bigrams); of those 159 bigrams removed, 124 were produced by IWA and 35 by the NBD group.These numbers primarily reflect that IWA produced more grammatical errors and less connected speech. We annotated PoS for each word using the R package "Spacyr" V.1.2.1 (Benoit & Matsuo, 2022), a wrapper to the Python "spaCy" library.Spacyr carries out a morphosyntactic analysis of each word in a transcript.Categories were grouped into two-factor levels: content words (nouns, verbs, adjectives, and adverbs) and function words (pronouns, prepositions, conjunctions, determiners, and interrogatives).Table 2 shows an excerpt from raw data. 
Outlier detection and log-transformation

To identify outliers for lexical frequency and collocation strength values, we used Grubbs' test. This analysis was applied to the data after the removal of ungrammatical bigrams. There was no evidence of outliers for lexical frequency in the NBD group, G(1.75) = .99, p = 1, or the IWA group, G(1.56) = .99, p = 1. For collocation strength, the NBD group showed outliers with a t-score below −43.79 (N = 18 out of 2068; 0.87%) and above 125.87 (N = 58 out of 2068; 2.81%), G(6.59) = .97, p < .001. In the IWA group, outliers were found below −55.64.

Since we entered every word and bigram that met our selection criteria into our models, we included pause duration values which were 0 ms (i.e., no pause). We subjected pause duration values to log transformation, with a bin size of .1 log units (Hird & Kirsner, 2010). After log transformation, the values for pause duration ranged from zero to five units. After transformation, the distribution was zero-inflated; it had a mode at zero, followed by a normal distribution curve.

We rescaled lexical frequency values onto the same log-transformed range as pause duration to analyse and converge the data on comparable scales. The dataset for lexical frequency was rescaled with minimum and maximum (min = 0, max = 5). As collocation strength values already converged on a narrow scale, we did not apply rescaling to this dataset.

Group comparisons

Table 3 displays the results for each group and comparisons between groups. We carried out comparisons of pause duration, word count, lexical frequency and collocation strength using Wilcoxon signed-rank tests. We used Chi-square tests for proportions of content vs. function words. While all participants made filled and silent pauses, the proportion of filled pauses was lower than that of silent pauses. The IWA group produced significantly fewer words, more pauses, and longer pauses, with higher proportions of silent and filled + silent pauses. The lexical frequency of content and function words and the collocation strength values were higher for the IWA group. There were no significant differences between the proportions of content and function words between groups.

Linear mixed models

To examine the effects of lexical frequency and collocation strength on pauses, we fitted linear mixed models (LMM) using the lme4 package (Bates et al., 2015) for R (R Core Team, 2020). Independent models were considered for the Lexical Frequency and Collocation Strength analyses, since Lexical Frequency and Collocation Strength are strongly correlated variables. For these models, we excluded words which were the first in an utterance to avoid confounding effects of utterance planning. Within each set, we determined the best model by starting with the simplest and adding interactions only when they resulted in a significantly better model, as determined by a likelihood ratio test.

Table 4 summarizes all LMMs. In the reference models of each set, the predictors were Group (NBD or IWA) and PoS, while Lexical Frequency and Collocation Strength (respectively) were placed as random factors. In Model 1, the predictors of Pause Duration were Lexical Frequency (or Collocation Strength), Group and PoS. Model 2 tested the interaction between Lexical Frequency or Collocation Strength and Group. Model 3 tested the interaction between PoS and Group. Finally, Model 4 examined the three-way interaction between all predictors. Note that Models 1–4 had a simpler random effect structure because we were worried about convergence. As a result, estimates are anti-conservative.
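The models were fitted with lme4 in R; as a shape reference only, here is a rough Python analogue of Model 2 for collocation strength using statsmodels (the toy simulated data frame and its column names are assumptions, not the study's data):

```python
# Rough statsmodels analogue of Model 2: log pause duration predicted by
# Collocation Strength x Group (+ PoS), random intercept per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "collostr": rng.uniform(0, 40, n),                  # bigram t-scores
    "group": rng.choice(["NBD", "IWA"], n),
    "pos": rng.choice(["content", "function"], n),
    "participant": rng.choice([f"p{i}" for i in range(20)], n),
})
group_eff = np.where(df["group"] == "IWA", 0.5, 0.0)   # IWA pause longer
slope = np.where(df["group"] == "IWA", -0.03, -0.01)   # stronger CS effect in IWA
df["log_pause"] = 1.5 + group_eff + slope * df["collostr"] + rng.normal(0, 0.6, n)

model = smf.mixedlm("log_pause ~ collostr * group + pos",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```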
Effects of lexical frequency on pause duration

Full results for each Lexical Frequency model are reported in Appendix A. The Reference model revealed a significant effect of Group (b = .46, t = 5.01, p < .001), as IWA made longer pauses, but no significant effect of PoS (b = −.01, t = −.43, p = .67). Model 1 was not significantly better (χ²(1) = 0, p = 1), but did show a significant effect of Lexical Frequency (b = −.02, t = −2.12, p = .034), in addition to the significant effect of Group (b = .49, t = 5.20, p < .001) (Table 5). The strongest model indicated that pauses were longer before content words, and that this effect was stronger for IWA (Table 5). With regard to variance explained, the models were weak; in Model 5, fixed effects explained 5% of the variance.

Effects of collocation strength on pause duration

We report full results for each Collocation Strength model in Appendix B. The Reference model revealed a significant effect of Group (b = .45, t = 5.51, p < .001), as IWA had longer pauses than the NBD group. Model 1 was not significantly better than the Reference model (χ²(1) = 0, p = 1). Once interactions were added, the main effect of Group remained significant, and interactions between Group and other predictors were also significant (Group: b = .56, t = 5.80, p < .001; Group × Collocation Strength: b = −.003, t = −2.22, p = .001). Analysis using a likelihood ratio test found that Model 2, which contains an interaction between Collocation Strength and Group, had significantly greater explanatory power than simpler models (χ²(1) = 10.26, p = .001; Table 6). In Model 2, the interaction between Collocation Strength and Group was statistically significant (R² = .14, b = −.003, t = −2.22, p = .001), as the effect of Collocation Strength on pauses was stronger in IWA (Fig. 2). More complex models did not significantly improve explanatory power.

Hurdle models

Because of the zero-inflated distribution of pause durations, we followed the pre-registered analysis with Bayesian hurdle-Gaussian models (Heilbron, 1994) to corroborate the LMM results. Hurdle-Gaussian models consist of two steps: the first aims to explain the zero values, and the second the positive values. We used hurdle models to further scrutinise the LMM results, in particular Model 2 for Collocation Strength. See Appendix C for results from the hurdle models.

Results from hurdle models investigating lexical frequency only partially supported the linear model results. Model 1, which included all fixed variables but no interactions, showed significant effects of Group (b = .46, p < .001) and Lexical Frequency (b = −.02, p = .046) on pauses. However, a hurdle model with all interactions (Model 5) did not converge well and only determined significant main effects for Group and Lexical Frequency, while interactions were not significant. Significant effects only explained the distribution of non-zero values.
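To clarify the two-step logic of the hurdle analysis, here is a frequentist two-part sketch in Python (the authors fitted Bayesian hurdle-Gaussian models; the simulated data and effect sizes below are purely illustrative):

```python
# Sketch: two-part ("hurdle") analysis of zero-inflated pause durations.
# Part 1 models whether a pause occurs at all; Part 2 models its duration
# given that it occurred. (The paper used Bayesian hurdle-Gaussian models.)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "collostr": rng.uniform(0, 40, n),
    "group": rng.choice(["NBD", "IWA"], n),
})
# Hypothetical generative story: stronger collocations -> fewer/shorter pauses
p_pause = 1 / (1 + np.exp(-(0.5 - 0.05 * df["collostr"])))
has_pause = rng.random(n) < p_pause
df["log_pause"] = np.where(
    has_pause,
    np.clip(2.0 - 0.03 * df["collostr"] + rng.normal(0, 0.5, n), 0.1, None),
    0.0,
)

df["any_pause"] = (df["log_pause"] > 0).astype(int)
part1 = smf.logit("any_pause ~ collostr * group", data=df).fit(disp=False)  # zeros
part2 = smf.ols("log_pause ~ collostr * group",
                data=df[df["log_pause"] > 0]).fit()                          # positives
print(part1.params, part2.params, sep="\n")
```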
For collocation strength, the results were fully corroborated by hurdle models. There was moderate evidence of a Group effect, which was statistically significant (b = .56, p < .001). The two-way interaction effect between Collocation Strength and Group was moderate and statistically significant (b = −.004, p < .001). Pauses had a moderate probability of being shorter within bigrams with higher Collocation Strength, and the effect was stronger in IWA. Effects were only significant for explaining non-zero values.

[Table 4 – LMMs investigating predictors for pause duration. Because of different dataset sizes and strong correlations between predictor variables, different analyses were applied to the Lexical Frequency and Collocation Strength models.]

Effects of aphasia severity on pause duration

The duration of pauses in speech may vary with the severity of aphasia (Goodglass et al., 1964). We used performance on the BNT and a composite score based on CAT comprehension subtests as proxies for aphasia severity (see 2.2.). Both were significantly and strongly correlated, r(20) = .71, p < .001. We applied separate LMMs to explore the influence of BNT and CAT comprehension composite scores on pause duration.

Discussion

Prior studies have shown that IWA produce words with higher lexical frequency and word combinations which are more strongly collocated (Bruns et al., 2019; Kittredge et al., 2008; Zimmerer et al., 2018, 2020). While these studies suggested that more familiar language is produced with less effort, our study of speech pause data as indicators of effort provides clear evidence that this assumption is likely correct. Pauses in speech were extracted from "The dinner party" comic strip narrations produced by a group of individuals with mild to moderate aphasia and NBD controls. Samples of speech from IWA contained more and longer pauses, replicating previous findings (Angelopoulou et al., 2018; Perkins, 1995). Both groups produced more silent than filled pauses, but IWA showed a higher proportion of silences. Types of pauses may indicate effort at different processing stages. The elicitation task requires constructing a plausible narration from a picture sequence and involves integrative processing across language and other cognitive networks, such as event processing. Interpretation of complex visual events may be dissociated from the linguistic description of such events (Brown et al., 2020; Fedorenko & Blank, 2020; Fedorenko & Varley, 2016; Ivanova et al., 2021). Our findings may reflect this dissociation, since it is assumed that filled pauses reflect higher-level picture/event conceptualisation, whereas silent pauses indicate the cognitive demands of lexical search or phrase construction (Angelopoulou et al., 2018; Butterworth, 1976, 1979). The higher proportion and duration of pauses in IWA may reflect the greater challenge of language production, demonstrating increased cognitive and linguistic demands to construct utterances.
IWA produced content words with higher lexical frequency, which is in concordance with previous research (Beattie & Butterworth, 1979; Geffen et al., 1979; Zimmerer et al., 2018). Furthermore, pauses were shorter before more frequent words, replicating previous findings (Goral et al., 2010). Note that the LMM analysis, but not the post-hoc hurdle models, found that the effect of frequency on pauses was greater for IWA than for NBD speakers, meaning that this particular interaction was not robust. Previously, high-frequency words were proposed to be easier to access than low-frequency words due to a higher resting state of the underpinning neural networks (McClelland & Elman, 1986), but more recent studies using model simulations have proposed that more frequent words are more strongly weighted in the lexical network (Nozari et al., 2010). Thus, high-frequency words are more resilient to damage to lexical networks. Future research can integrate other variables that also influence pauses, such as neighbourhood density, age of acquisition, or grammatical category (Baayen et al., 2016), exploring how they interact and which factors are stronger determinants of pause duration.

In accordance with the results of Zimmerer et al. (2018), IWA produced bigrams with higher collocation strength than NBD controls. A novel finding was the effect of bigram collocation strength on pauses in speech, with shorter pauses within more strongly collocated bigrams. The effect was greater in IWA. These results may reflect how residual language in IWA is expressed in more strongly collocated word combinations, which therefore helps decrease the processing demands in connected speech, as evidenced by shorter pauses. According to usage-based theories (Bybee & Beckner, 2015; Christiansen & Chater, 2018; Goldberg, 2003; Langacker, 1987a), strongly collocated word combinations may be processed holistically, either via strengthened connections between individual words or as single morpheme-like units (Siyanova-Chanturia et al., 2017; Tremblay & Baayen, 2010). These theories predict greater ease of processing for stronger collocations and are therefore supported by our study.

However, many neurolinguistic models are based on the distinction between lexicon and grammar. For instance, Friederici (2011) claimed that syntactic processes are strongly supported by frontal left-hemisphere areas, whereas processing of lexical-semantic information is based in frontotemporal areas, assuming distinctly different neural and cognitive processing for the two functions. Hagoort (2013) makes a distinction between left frontal areas, which support combination of linguistic units (e.g., words), and left temporal areas, which support storage of these units. We need to consider whether usage properties such as collocation strength can be integrated into these models, or whether they call for different models entirely. For example, Van Lancker Sidtis (2012) observed a decrease in familiar language use in individuals with right hemisphere damage or Parkinson's disease, suggesting that the representation of familiar language can be supported by the right hemisphere, as well as subcortical nuclei. This view is supported by other evidence: Skipper et al.
(2022) asked NBD individuals to repeat a number of sentences over 15 days. Subsequently, the participants were asked to listen to the learned sentences and to novel ones during fMRI scanning. The repeated sentences, compared to novel sentences, elicited stronger activation of the bilateral sensorimotor areas and the right-hemisphere frontal gyrus. Further evidence has been provided by studies employing event-related potentials (ERPs). Siyanova-Chanturia et al. (2017) compared strong collocations (e.g., "knife and fork") with rarer combinations of semantically related nouns (e.g., "spoon and fork"). A larger right-lateralised P300, followed by a smaller N400, was elicited in response to the usual collocations. Here, the larger P300 effect is based on high predictability, such as within multiword expressions, whereas the smaller N400 effect reflects easier semantic integration (see also Vespignani et al., 2010). We believe that these observed differences in the neurological processing of familiar language relate to our behavioural finding that stronger collocations involve fewer and shorter speech pauses.

While results based on LMMs (which we pre-registered) and post-hoc hurdle-Gaussian models were largely similar, we did observe discrepancies. In LMMs, the interaction between lexical frequency and group revealed a significant effect on pauses. In hurdle models, it was not significant. Furthermore, the interaction between collocation strength and group was strong in LMMs, but moderate in hurdle models. While LMMs can be robust even when distributional assumptions are not met, hurdle models are better suited to such zero-inflated data. Nevertheless, the use of both model types is not elegant. Future studies with similar designs should provide clarity, and we advise focusing on hurdle models.

To our surprise, aphasia severity did not have a significant effect when added to models which only included IWA. However, we would advise against the conclusion that severity is not associated with pause duration or with the effects of lexical frequency or collocation strength. As there was an effect of aphasia in group comparisons, it would be reasonable to expect an effect of severity. We note that our sample of IWA represented the mild-to-moderate end of aphasia severity, and it is possible that a more heterogeneous sample could detect such an effect.

Finally, and related to clinical practice, many aphasia assessments and therapy protocols focus on single-word properties (Bruehl et al., 2023). Clinical research could consider word collocations both in assessment and in therapy. For instance, Melodic Intonation Therapy employs high-frequency formulas, based on the observation that song lyrics can be produced even in severe aphasia, with this resilience stemming from their likely storage in a holistic manner (Stahl & Kotz, 2014). Unification Therapy Integrating Lexicon and Sentences (UTILISE) includes high-frequency constructions as early training items and introduces variations later (Varley et al., 2020). For example, the construction I made it (PERSON made THING) can be systematically loosened and lengthened, with new lexical items inserted into the PERSON (He made it) or THING (I made coffee) slots, or an adjunct added (She made coffee today). Moreover, the availability of new technologies allows clinicians to efficiently and accurately obtain the statistical properties of language and/or the duration of pauses. In general, aphasia assessment includes measures that might reflect fluency in language production (e.g., mean length of utterance) but do not allow naturalistic labelling of fluency, such as pause duration. Pauses in speech, given their likely
sensitivity to collocation strength, could be considered an outcome measure in interventions.

Limitations

Analysis of combinations was restricted to bigrams. However, collocation strength within larger units may contribute further insights. Another limitation is that we did not address the impact of other linguistic phenomena on pauses. Our model does not consider the type of syntactic structure, specific parts of speech beyond the dichotomy of content and function words, or other usage variables such as the age of acquisition of individual words and phrases. The effect of age of acquisition has been established at the lexical level but is underexplored at the level of word combinations (Arnon et al., 2017). Also, while it is common to consider frequency effects at the word level, frequency and co-occurrence effects may manifest at other levels, such as phonemes and morphemes. Further studies might compare the collocation strength of sublexical units to bigrams or multiword utterances and their influence on cognitive effort in speech. Future research could add such variables; however, more complex models require larger data sets and face the great challenge that frequency-based variables are often strongly correlated. A further limitation concerns the combination of filled and silent pauses, which could reflect different cognitive processes. In our study, we combined these to avoid further complicating the models; after analysis, we understand that our data would not have been suitable for such a comparison because of the relatively small number of filled pauses. However, pause type may interact with our predictors.

When registering the study, we were worried about convergence and chose a less complex structure for our models' random effects. While this is a valid decision, it made the models less conservative. Because we had already added hurdle models post-hoc, we chose not to complicate our report by exploring model variations with different random-effect structures. While we assume that the stronger effects reported here would survive more conservative models, replication using different models would help further examination.

The final limitation concerns our removal of bigrams which do not occur in our reference corpus or are ungrammatical (or both). The removal of ungrammatical utterances, while necessary when using our frequentist methods in natural speech, might distort evaluation of formulaic output in language production. While ungrammatical combinations are less likely to be formulaic and may not even involve higher-level speech planning in NBD (Ramanarayanan et al., 2009), it is possible that ungrammatical formulas are developed in IWA as compensation for production difficulties. Repetition or pragmatically specific use of an ungrammatical expression could result in holistic representation. This possibility is currently underexplored and could be addressed by future research.

While important questions about pausing behaviour in language production remain, our study demonstrates that the inclusion of usage-frequency variables, including collocation strength at a multiword level, can help us understand which aspects of language form affect processing demands. The FLAT is available here: https://osf.io/v8mg9/. Legal copyright restrictions prevent public availability of BNT and TROG test materials.

Open practices

The study in this article has earned Preregistered badges for transparent practices. The preregistered studies are available at: https://osf.io/v8mg9/.
Fig. 1 – Example of annotation in the ELAN software, using two tiers: "Type" classifies the type of pause (silent pause, filled pause, filled-silent/silent-filled pause or no pause), and "Transcript" displays the transcription of each narration.

Fig. 2 – Predicted values of the interaction between Collocation Strength and Pause Duration for the best fitting model (Model 2). For both groups, Pause Duration was shorter within bigrams with high Collocation Strength values. The effect is greater for the IWA group.

Table 1 – NBD individuals and IWA sociodemographic data and IWA language assessment. Group comparisons were carried out using t-tests (* = significantly different at p < .05). BNT = Boston Naming Test, used to assess lexical retrieval; CAT = Comprehensive Aphasia Test, subtests used to determine extent of lexical and grammatical impairment. Language assessment showed that the IWA group had mild to moderate language impairment.

Table 2 – Example of raw data for the utterance "they have a fish", generated from ELAN annotation (pause type and pause duration).

Collocation strength outliers below the lower threshold (N = 9 out of 1475; .61%) and above 157.29 (N = 13 out of 1475; .88%), G(5.02) = .97, p < .001, demonstrated greater variance of collocation strength within the NBD group. Outlier values were removed from the statistical analysis.

Table 3 – Means for pause duration, type of pauses, PoS distributions, lexical frequency and collocation strength values by group. The section on pauses contains proportions of silent pauses, filled pauses, filled + silent pauses and no pauses between words. The median pause duration between words for NBD individuals is close to zero because, most of the time, they made no pause between words, resulting in zero-inflated distributions (see 2.1 and 3.1.6). IQR = interquartile range. Pauses, PoS and lexical frequency were calculated before removal of ungrammatical bigrams and of lexical frequency and collocation strength outliers. Mean and median lexical frequency were calculated after removal of outlier values. Mean and median collocation strength (t-scores) were calculated after removal of ungrammatical bigrams and outlier values.

The best fitting model was Model 4, which included all two-way and the three-way interactions, and was significantly stronger than the best model with a single two-way interaction (χ²(3) = 14.20, p = .003). That model showed significant effects of Group and Lexical Frequency, and significant interactions between Lexical Frequency and PoS (b = .26, t = 2.36, p = .019), as frequency effects were stronger for content words, and between Group and PoS (b = .16, t = 2.01, p = .045).

Table 5 – Model 1 for Pause Duration predicted by Lexical Frequency and multiple interactions.

Table 6 – Best fitting model for Pause Duration including Collocation Strength. Model 2 is characterized by significant interactions between Group and Collocation Strength, and the main effect of Group.
Model 2 structure: Pause Duration ~ Collocation Strength * Group + PoS + (1 | Speaker)
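For readers unfamiliar with the collocation measure appearing in Tables 3 and 6: collocation strength here is a bigram t-score. The sketch below shows the standard corpus-linguistics computation under the usual independence assumption; the counts are made up for illustration and are not taken from the paper's reference corpus.

```python
import math

def bigram_t_score(f_bigram, f_w1, f_w2, n_tokens):
    # t ~ (observed - expected) / sqrt(observed),
    # where expected = f(w1) * f(w2) / N under independence.
    expected = f_w1 * f_w2 / n_tokens
    return (f_bigram - expected) / math.sqrt(f_bigram)

# Made-up counts: a 100-million-token corpus in which "knife and"
# occurs 1200 times, "knife" 10,000 times and "and" 2.7 million times.
print(bigram_t_score(1200, 10_000, 2_700_000, 100_000_000))  # ~26.9
```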
2024-07-14T13:12:54.500Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "24799d479f3011626aec10aee2f41b7ff8e69f75", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1016/j.cortex.2024.06.012", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "24799d479f3011626aec10aee2f41b7ff8e69f75", "s2fieldsofstudy": [ "Linguistics", "Medicine", "Psychology" ], "extfieldsofstudy": [] }
257706245
pes2o/s2orc
v3-fos-license
The Past and Future of East Asia to Italy: Nearly Global VLBI We present here the East Asia to Italy Nearly Global VLBI (EATING VLBI) project. We report how this project started and how the international collaboration between Korean, Japanese, and Italian researchers evolved to study compact sources with VLBI observations. Problems related to the synchronization of the very different arrays and technical details of the telescopes involved are presented and discussed. The relatively high observation frequencies (22 and 43 GHz) and the long baselines between Italy and East Asia produce high-resolution images. We present example images to demonstrate the typical performance of the EATING VLBI array. The results attracted international researchers, and the collaboration is growing, now including Chinese and Russian stations. Projects in progress are presented, and future possibilities with a larger number of telescopes and better frequency coverage are briefly discussed.

Introduction

The East Asia To Italy: Nearly Global VLBI (EATING VLBI) is a collaboration among the radio astronomy groups of the Italian Istituto Nazionale di Astrofisica (INAF), the National Astronomical Observatory of Japan (NAOJ), the Korea Astronomy and Space Science Institute (KASI) and now all institutions associated with the East Asian VLBI Network (EAVN). This collaboration started in 2010. The INAF Institute of Radio Astronomy (IRA), the Bologna University, and the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA) submitted a project to the Italian Ministry of Foreign Affairs (MAE) to foster scientific collaboration in radio astronomy between Italy and Japan. There were six principal investigators (PIs), including G. Giovannini and Y. Murata, and the project was funded. Thanks to these funds, people from Japan and from Italy started exchanging visits, primarily between Bologna and Tokyo. The visits were very fruitful and the collaboration project started. At that time, the main goal was to start a collaboration in preparation for the launch of the VLBI Space Observatory Programme-2 (VSOP2) satellite and, subsequently, to study compact active objects with VLBI observations using telescopes in Japan and Italy to increase the sensitivity and angular resolution. After the termination of the VSOP2 project (July 2011), the collaboration continued with the aim of studying active galactic nuclei (AGN) at high resolution and of enhancing the shared experience in high-frequency observations. Other groups joined this project, such as NAOJ and other people from universities in Japan, astrophysics researchers from Korea and East Asia, and other INAF observatories and universities in Italy. The observing array now includes the INAF radio telescopes, the joint Japanese and Korean VLBI network (KVN and VERA array, KaVA), and other telescopes (EAVN). The aim of this paper is to review the basic elements of this collaboration, presenting technical and scientific results and discussing the main results obtained and possible future developments. An important aspect of this collaboration is also the friendship among people in Italy and East Asia (mostly Japan, Korea, and China), which matters from the human point of view but is also important for obtaining strong scientific results and a fruitful collaboration.
In the next sections, we will give a short history of this project (Section 2), the technical properties of the available array (Section 3), the scientific results (Section 4), and the expected developments in the near future (Section 5).

Project History

In 2010, thanks to funds from the Italian MAE to improve the scientific collaboration between Italy and Japan, we started an exchange program to enable researchers from Japan to visit Bologna and researchers from Italy to visit ISAS/JAXA and NAOJ in Japan. This project was funded for 5 years, and it was a good opportunity to host in Bologna a large group of researchers from Japan for visits of one month or even longer. Among them, we mention Hiroshi Nagai, Kazuhiro Hada, Shoko Koyama, Akihiro Doi, and Motoki Kino. Monica Orienti, Gabriele Giovannini, Filippo D'Ammando and Marcello Giroletti also visited ISAS/JAXA and NAOJ for some time. The main scientific topics discussed were the experience of Italian researchers in high-resolution VLBI observations at low/intermediate frequencies (L and C bands), including the correlation with gamma-ray emission from AGN, and the experience of Japanese researchers in space VLBI and in high-frequency VLBI observations with VERA. In 2012, Kazuhiro Hada obtained a Canon Grant to spend one year in Bologna. A very important step in these visits was a meeting at the beginning of 2011 with Honma-san at NAOJ where, for the first time, the possibility of common VLBI observations in both Italy and Japan, to increase the sensitivity and angular resolution of the single arrays, was discussed. Planning of joint VERA + IRA VLBI observations started. In October 2011, the Japanese and Italian governments organized an important meeting in Tokyo to review the strong collaboration present in many fields. Murata-san and G. Giovannini had the opportunity to present the astrophysical collaboration and the future project of common observations. An important result was also the start of a new project led by H. Nagai, in collaboration with NAOJ and the IRA/INAF VLBI group, named GENJI (Gamma-ray Emitting Notable-AGNs Monitoring by Japanese VLBI). This program has run from 2012 to date with the aim of observing gamma-ray AGNs using the VERA array at 22 GHz [1]. Such frequent monitoring at high resolution is a unique and very interesting observational project in light of important results from the Fermi gamma-ray space observatory. In October 2012, the first meeting with the name 'EATING VLBI' was held in the CNR research area in Bologna. The meeting provided an opportunity to make this collaboration visible to the VLBI community and to promote joint research using this new VLBI network. On 19 February 2013, the first common VLBI observation was organized between an Italian radio telescope, in Noto, and the VERA array: a few hours of observations of 3C 84 and other calibrator sources to check the frequency receivers. The experiment did not yield fringes, as the Noto receiver was not cooled and the sensitivity was too low. Three more test experiments, two in April and one in May, failed because of problems with the data format and recording. In 2014, we investigated a different recording format and correlator mode, and we organized a test observation of a few hours on 19 February 2015 using VERA in Japan and all three radio telescopes, namely Medicina (Mc), Noto (Nt), and Sardinia (Sr), in Italy. The experiment was carried out at 22 GHz with a 1 Gbps recording rate. Finally, fringes were detected!
The result was positive but not as clean as we hoped; still, it showed that we were on the right track. On 25 March 2015, during the EU-Korea Innovation Day, at the suggestion of the Italian Embassy in Seoul, information and details of the Italian telescopes were presented, and a strong collaboration between Japan, Korea (KaVA), and Italy started. In July 2015, we submitted a proposal for a collaboration between Italy and Korea to the Italian government, similar to that obtained for Italy-Japan. This proposal was not funded because of the strong limitation of funds, but it was then submitted to a call of the NST, for three years and approximately EUR 200 thousand. On 5 April 2016, a long (10 h) observation using VERA, Mc, Nt, and Sr at 22 GHz produced good fringes! (see Figure 1), and on 23 November 2016 Bong Won Sohn wrote: "Dear colleagues, I am happy to deliver good news which I just received. Our proposal is selected!" Thanks to these funds and the start of EAVN operations, real EATING activity started, adding the Italian telescopes to the EAVN telescopes.

Arrays and Correlator

In Table 1, we list the available arrays and their performances, giving reference values for 1-hour, 1 Gbps observations at 22 GHz. In the following subsections, we provide more information about the single arrays, their combinations, and the correlator facilities.

Italian Telescopes

INAF operates three fully steerable parabolic radio telescopes: two 32 m dishes in Medicina (Mc) and Noto (Nt), and the 64 m Sardinia Radio Telescope (SRT, or Sr). The locations of the stations are shown in Figure 2 and their coordinates are given in Table 2. The Mc and Nt parabolas were designed and built in the 1980s primarily for VLBI operations in the context of the European VLBI Network (EVN) activities in the cm-wavelength domain. Over the years, they received hardware and software upgrades allowing them to operate with good efficiency, also as stand-alone instruments, in continuum [2,3] and spectral-line [4] modes, and, in the case of Nt, up to 43 GHz thanks to its active surface. The SRT is a more recent project, completed in 2013, and conceived with the goal of operating both as a single dish and as a sensitive VLBI element between 0.3 and 100 GHz [5]. All three stations are connected with broadband optical fibers and can directly send data to the storage units and software correlator in Bologna. The baseline lengths and reference sensitivities are given in Table 3. The three stations are committed to observing in the context of the EVN consortium activities for up to ∼70 days per year. The remaining time is devoted to maintenance and development, single-dish activities, geodetic observations in the framework of the International VLBI Service (IVS, for Mc and Nt), support for the Italian Space Agency (ASI, for Sr), and other VLBI projects that can be arranged on an ad hoc basis, such as the recent PRECISE project (Pinpointing REpeating ChIme Sources with EVN dishes) devoted to the localization of fast radio bursts (FRBs) [6]. Observation times are offered twice per year through open calls for proposals, and the allocation process is managed by an international panel. This has offered the possibility to carry out pilot experiments in coordination with East Asian partners, and ultimately resulted in a Memorandum of Understanding (MoU) between INAF and KASI, reserving up to 30 h per semester for projects involving the use of INAF and KVN stations, with PIs affiliated with either one of the two institutions (see Section 3.3).
The current suite of receivers for each station is indicated in Table 2. However, the situation is evolving rapidly. Driven by several ambitious scientific goals (including compliance with the latest EVN roadmap [7] and the possibility to further develop joint observations with East Asian partners), INAF has obtained substantial financial support through the National Operative Program of the Italian Ministry of University and Research. With this program, INAF recently purchased compact triple-band receivers, developed and built in Korea, to perform simultaneous VLBI observations in the K-, Q-, and W-bands (22/43/86 GHz) with all three of its radio telescopes. Within the same project, funding is available to improve the surface accuracy in Medicina (to allow W-band observations) and to upgrade the Italian VLBI software correlator. Work is in progress, and the first test observations could start in spring 2023.

KVN and VERA Array (KaVA) and East Asian VLBI Network (EAVN)

KaVA is a VLBI array that combines KVN and VERA. Its baseline lengths range from 305 to 2270 km [8]. The observation frequencies are 22 and 43 GHz, with maximum angular resolutions of 1.24 mas and 0.63 mas, respectively. The combined KaVA compensates for the limitations of each array, namely the low spatial resolution of KVN and the low sensitivity to extended structure of VERA [8,9]. The operation format of KaVA was naturally inherited by EAVN when the latter became operational [10]. EAVN is an international VLBI facility in East Asia, operated under mutual collaboration between East Asian countries, as well as some Southeast Asian and European countries. EAVN consists of 16 telescopes (4 in China, 8 in Japan, and 4 in Korea) and 3 correlator sites (Shanghai in China, Mizusawa in Japan, and Daejeon in Korea) in East Asia [10,11]. EAVN is currently operated in three frequency bands, 6.7, 22, and 43 GHz, with a maximum baseline length of 5078 km between the Urumqi and Ogasawara telescopes, corresponding to angular resolutions of 1.82, 0.55, and 0.28 mas at 6.7, 22, and 43 GHz, respectively. For more details of EAVN, see [11]. EAVN has provided part of its observing time as the 'EATING VLBI' session since the second half of 2020, with a maximum total observation time of 30 h per semester (see next Section). As of the first half of 2022, thirty-seven epochs of EATING VLBI observations had been conducted, with a total observing time of 275 h, of which Italian telescopes joined for 135 h. A major limitation is the limited common source visibility from the Italian and East Asian telescopes.

EATING VLBI

East Asia To Italy: Nearly Global VLBI (EATING VLBI) is a Eurasian VLBI collaboration of the East Asian VLBI Network and the Italian INAF radio telescopes [12]. In addition, 3 Russian stations (Badary, Svetloe, Zelenchuk) have occasionally joined EATING VLBI sessions since 2020.

The Bologna Correlator

The Bologna correlator is currently made of 3 storage and 2 computing nodes. The mixed storage-and-computing nodes have 11 TB of scratch SSD storage and 100 TB of SAS disk storage, whereas the computing nodes have 14 TB of scratch storage. There is an additional dedicated storage node with 200 TB available. With the current setup, we are able to correlate data sampled at 1 Gbit/s in roughly half of the observation time.
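The angular resolutions quoted above for KaVA and EAVN follow directly from the diffraction relation θ ≈ λ/B, with λ the observing wavelength and B the maximum baseline. The short Python sketch below is only a quick numerical cross-check of those figures, not part of the project's software:

```python
import math

C = 299_792_458.0                      # speed of light, m/s
RAD_TO_MAS = math.degrees(1) * 3600e3  # radians -> milliarcseconds

def resolution_mas(freq_ghz, baseline_km):
    """Diffraction-limited fringe spacing, theta ~ lambda / B, in mas."""
    wavelength_m = C / (freq_ghz * 1e9)
    return wavelength_m / (baseline_km * 1e3) * RAD_TO_MAS

# EAVN maximum baseline (Urumqi-Ogasawara, 5078 km):
for f in (6.7, 22.0, 43.0):
    print(f"{f} GHz: {resolution_mas(f, 5078):.2f} mas")
# -> 1.82, 0.55, 0.28 mas, matching the values quoted in the text
```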
The nodes are interconnected via 40 Gbit InfiniBand and have a 10 Gbit connection to the national research and education network (NREN) coordinated by the 'Gruppo per l'Armonizzazione della Rete della Ricerca' (GARR). The planned future upgrades are to acquire further computing nodes with NVMe and SSD storage and to integrate the current disk storage into a network file system such as Lustre.

The Daejeon Correlator at the Korea-Japan Correlation Center

The Daejeon Correlator of the Korea-Japan Correlation Center (KJCC) at KASI is a hardware FX correlator jointly funded by KASI and NAOJ. It currently processes KVN's international collaboration observations, including those of the East Asian VLBI Network (EAVN), its precursor KaVA, and the EATING VLBI network. A maximum of 16 stations (single-polarization mode) and a maximum input data rate of 8192 Mbps per station can be processed. Several VLBI data playback systems can be used, such as Mark5B, VERA2000, and OCTADISK. In order to process data from these various playback systems, the Daejeon Correlator uses an intermediate buffer system called the Raw VLBI Data Buffer (RVDB), which was developed by NAOJ. At 22 GHz, KJCC has 0.05 km s^-1 velocity resolution, ±36,000 km maximum delay and 1.075 kHz maximum fringe tracking. All data are transferred by optical fiber connection.

Key Targets and Examples of EATING Images

Here, we present example EATING VLBI images of various sources to demonstrate the typical performance of the EATING VLBI array. We select data from two representative sessions, for which the observations were largely successful over the array. These are part of ongoing EATING VLBI monitoring programs on M 87 (Session A) and 1H 0323+342 (Session B), for which dedicated studies and detailed data analysis procedures will be published in separate papers. The basic information of these two sessions is summarized in Table 4. In what follows, we give brief notes and prospects on some key targets of the EATING VLBI program. To produce these images, initial calibration was performed in AIPS, and subsequent self-calibration and imaging in DIFMAP.

M 87

M 87 shows a prominent jet extending up to several kpc away from the center of the galaxy, which can be observed at radio, optical, and X-ray wavelengths. Since its discovery [14], the jet of M 87 has been extensively monitored for over a century. Recently, the Event Horizon Telescope (EHT) Collaboration has successfully imaged the black hole shadow of M 87 and constrained its mass to be MBH ∼ 6.5 × 10^9 M⊙ [15]. Beyond the horizon scales, Global Millimeter-VLBI Array (GMVA) observations at 86 GHz revealed that the jet is limb-brightened, suggesting multiple layers in the jet [16,17]. On scales beyond ∼100 Schwarzschild radii (Rs = 2GM/c^2), the M 87 jet has been intensively studied with centimeter-VLBI facilities such as the VLBA, EVN, and KaVA/EAVN. There is growing evidence that the jet is gradually accelerated over de-projected distances between ∼10^2 Rs and ∼10^6 Rs from the jet base [18-22]. The acceleration region is characterized by a parabolic shape [23,24], indicating that jet acceleration and collimation are intimately related, as predicted by magneto-hydrodynamic (MHD) acceleration models. Nevertheless, the velocity profile of the M 87 jet appears to be more complicated. For example, the extrapolation of observed velocities at distances <10^2 Rs implies subluminal motions, while GRMHD simulations [24,25] predict superluminal speeds.
Furthermore, the jet acceleration is less efficient than predicted for a highly magnetized jet, as suggested by other previous observations [17,26]. Indeed, there is still a substantial discrepancy between the observations and our understanding. In order to overcome the previous limitations, it is necessary to expand the M 87 monitoring with high angular resolution, sensitivity, and observing cadence. EATING VLBI at 22 GHz is currently the only facility that allows us to regularly monitor the innermost jet regions where the initial acceleration takes place. In fact, the angular resolution of EATING along the M 87 jet direction is 0.15 mas (∼67 Rs) on the sky, which is suitable for reliably investigating the region within <10^2 Rs. The first EATING VLBI experiment towards M 87 was performed in 2017 [27]. From the end of 2019, we started our regular EATING monitoring program on a long-term basis. Our preliminary analysis of the inner 1 mas kinematics from multi-epoch images detected possible superluminal jet features, but this is not conclusive due to the limited number of epochs and cadence. More detailed analysis is ongoing, with a typical observation cadence of ∼3 weeks, and the future monitoring will further allow us to better understand the M 87 jet physics.

1H 0323+342

1H 0323+342 is a well-known narrow-line Seyfert 1 galaxy (NLSy1). This source is the nearest (z = 0.063) γ-ray-detected NLSy1 [28] and is one of the very few NLSy1s where the host galaxy is resolved [29]. Thus, this source is a unique target that allows us to probe the vicinity of the central engine at the highest linear resolution among γ-ray active NLSy1s (1 mas = 1.2 pc). The pc-scale structure of this source was first examined by [30]. More recently, an extensive VLBA analysis by [31] discovered a parabolic shape of the inner jet, resolving a collimation zone near the jet base. In addition, the collimation zone ends with a bright quasi-stationary component "S" at 7 mas from the core, indicating that S represents a recollimation shock that could be associated with the site of high-energy γ-ray emission [31]. Nevertheless, the structure and dynamics of the innermost jet regions are still controversial. While previous multi-epoch VLBA monitoring of this jet reported highly superluminal motion up to ∼9c [32,33], some previous studies claim the possible presence of another stationary feature at 0.3-0.7 mas from the core [30,34]. If real, this upstream feature might represent the first recollimation shock rather than S, and would suggest a new site of γ-ray emission. However, previous VLBI images were not conclusive due to insufficient resolution (at ≤22 GHz) or sensitivity (at 43 GHz). To better probe the innermost regions, EATING VLBI at 22 GHz is an optimal approach thanks to its high resolution, high sensitivity, and high-cadence monitoring capability. From October 2019, we started EATING monitoring observations of this source at typical intervals of 3 months, completing 11 sessions to date. Through this program, the innermost regions of 1H 0323+342 were robustly resolved at 0.15 mas (∼0.18 pc) scales (Figure 5). Remarkably, the EATING images unambiguously detected a bright feature located at ∼0.5-0.9 mas from the core, and our preliminary analysis of the multi-epoch images indicates the quasi-stationary nature of this feature. Therefore, we are indeed beginning to constrain the innermost structure of this source.
More detailed analysis is underway, and ongoing monitoring will further allow us to test the MHD jet acceleration/collimation paradigm and the Doppler boosting and Lorentz factors that are key parameters to constrain the site of γ-ray emission.

3C84

The nearby radio galaxy 3C 84 (z = 0.0176), at the center of the Perseus cluster, is known as an excellent laboratory for exploring the physics of energy transport by radio lobes at parsec scales. Around 2005, the radio flux was reported to have started increasing again [35]. VLBI observations revealed that this flare originated from within the central pc-scale core, accompanied by the ejection of a new jet component, known as C3, expanding southward with respect to the core [36,37]. This new component emerged south of the core at approximately 2003, and it has propagated southward and become brighter [38]. EATING VLBI, with its higher spatial resolution in the east-west (EW) direction, has the advantage of reproducing the fine structure of this object. While the physical properties around the C3 component have been discussed based on the simple kinematics of the brightness peak of C3 using VERA and KaVA [36,38-40], the new EATING image shows a spatially resolved fine structure around the brightness peak (Figure 6). This is a good example of the advantage of EATING over VERA and KaVA. The high spatial resolution also allows us to resolve the core component. The EATING image shows that the core is elongated in the EW direction, by more than the beam size, which is consistent with the double nuclear structure in the core region [41]. EATING also has an advantage in imaging extended structures thanks to the short baselines provided by EAVN. In contemporaneous data, the VLBA did not adequately capture the structure of the entire extended radio lobe [40]. In contrast, EATING was able to capture the overall structure of this newborn radio lobe and its internal sub-structures. A counter radio lobe on the north side is also shown in the image. This structure is spatially resolved, in agreement with its internal structure in the east-west direction [42,43]. Interestingly, the east-west structure of the counter-jet appears to be different when compared to archival images from the Boston University blazar monitoring of the same period, November 2019 (see https://www.bu.edu/blazars/research.html, 15 January 2023). This may suggest changes in the apparent structure of the counter-jet on a short time scale. This would be an interesting future research topic.

B0218+357

B0218+357 is a distant (z = 0.944) flat-spectrum radio quasar (FSRQ) gravitationally lensed by a foreground (z = 0.685) spiral galaxy, B0218+357G. The lensing effect splits the AGN into two lensed images (A and B) separated by 335 mas [44]. The source is also known as one of the few gravitationally lensed quasars where active gamma-ray emission is detected up to GeV/TeV energy bands [45,46], rendering this source a unique target with which one can probe the innermost active regions of AGN via microlensing. From 2017, we started detailed monitoring observations of B0218+357 with KaVA to better constrain the innermost jet structure and its evolution. The initial results are published in [47], where we robustly detected a core-jet structure up to 22/43/86 GHz (KVN only at 86 GHz).
Our KaVA monitoring observations were also conducted as part of large multi-wavelength (MWL) campaigns from radio to TeV γ-rays [46], playing a key role in constraining the broadband properties from the radio side. Nevertheless, there are still some outstanding mysteries in the previous VLBI images of this source. First, the core-jet morphology in each lensed image has remained extremely stable over at least two decades since the first VLBI images of this system were obtained [48,49]. This implies that the jet speed is either very low or the jet component is associated with a stationary shock, as often claimed in jets of active gamma-ray blazars. Second, while the observed magnification ratio of the lensed images of the core is in agreement with a simple lens model, the observed ratio for the jet component (A2/B2 ∼ 3-3.3) appears to be smaller than the predicted one, implying that an additional effect, such as "substructure lensing" by a compact foreground clump, may be at work. To better answer these questions, EATING VLBI plays a key role thanks to its higher-resolution and higher-cadence monitoring capability. A preliminary EATING image of B0218+357 shows that both lensed images are spatially resolved down to 0.15 mas scales (Figure 6). The jet component in each lensed image was also resolved into substructures. The ongoing multi-epoch analysis of the EATING data will further allow us to constrain the nature of the jet component and its possible motion in great detail.

Other Sources

Besides the sources mentioned above, we are monitoring a growing number of sources with EATING VLBI (e.g., Figure 6). These include active gamma-ray blazars (e.g., 3C279 and 3C273), nearby radio galaxies (e.g., 3C120 and 3C111), neutrino-emitting blazars (TXS 0506+056), nearby low-luminosity AGN, etc. High-resolution EATING VLBI observations of these sources allow us to resolve key regions such as the sites of gamma-ray or neutrino production, as well as the jet collimation/acceleration scales in the vicinity of the central black holes. The EATING VLBI monitoring observations of multiple jet sources at 22 GHz will also play a role complementary to other existing VLBI jet monitoring programs, such as MOJAVE (Monitoring Jets in Active galactic nuclei with VLBA Experiments) at 15 GHz and the Large VLBA Project BEAM-ME at 43/86 GHz (successor to VLBA-BU-BLAZAR) conducted with the VLBA by the Boston University blazar group.

Future Prospects

As highlighted in the previous sections, EATING VLBI is now in stable operation and the first scientific outcomes are being produced. EATING is a unique VLBI array that allows regular monitoring with global baselines. Nevertheless, the current capability is limited to the K-band, 1 Gbps, and single polarization. To take full advantage of EATING, it is desirable to upgrade its observing capabilities.

• Enhance recording rates: Increasing the recording rates of the network is essential to enhance the array sensitivity and expand our science targets to weaker objects. Although EATING is formally limited to 1 Gbps, individual regional VLBI networks are already operating at higher recording rates of up to 4-32 Gbps. Commissioning of wideband EATING VLBI observations is currently ongoing.

• Dual-polarization: Polarimetric observations are crucial to investigate the magnetic-field properties of relativistic jets. However, some of the primary stations in East Asia have not been capable of dual-polarization until very recently [50].
High-resolution polarimetric EATING will allow us to spatially resolve and monitor the time evolution of the magnetic-field structures in the acceleration and collimation regions of the jets. Dual-polarization is also important to enhance the sensitivity to maser emission.

• Expand frequency coverage: While many of the stations in East Asia are capable of observations up to the 43 or 86 GHz bands, compact triple-band (22/43/86 GHz) receivers are being installed at the three Italian telescopes. This will allow us to perform EATING VLBI observations in the 22, 43, and 86 GHz bands quasi-simultaneously, which further increases the angular resolution. An expansion of the EATING frequency coverage to lower frequencies, such as the C and L bands, is also under consideration.

• Expansion of the network: Although the current EATING array has global baselines, they are largely limited to the east-west direction, resulting in highly elongated beam shapes. To expand the north-south baselines, we are currently testing EATING observations in conjunction with telescopes in Australia. We also plan to perform joint EATING observations with the Thailand National Radio Telescope in South East Asia, which will nicely fill the UV gap between East Asia and Australia.

Summary

A collaboration between Italy and Japan started in 2010 in preparation for the VSOP2 launch and then grew in importance by involving more and more people and institutes in different countries; by the end of 2012, at a meeting in Bologna, the East Asia To Italy: Nearly Global VLBI (EATING VLBI) collaboration had started. At present, it is a collaboration among the radio astronomy groups of INAF, NAOJ, KASI and, recently, all institutions associated with EAVN. Italian and EAVN telescopes are collaborating, and observing time can be requested for common observations. A MoU was signed between INAF and KASI for coordinated proposal calls on 9 April 2019. It recognized the mutual interest in conducting VLBI experiments and the long-standing collaboration, including collaborative research on supermassive black holes. Both parties agreed to provide 30 h of observation time per semester for common VLBI observations with the INAF telescopes and the KVN. Data are correlated at the Daejeon correlator. Moreover, more observation time can be requested outside the MoU, as general scientific activity, through the two time allocation committees. The capability of this 'nearly global' array was briefly demonstrated here, and a large improvement is expected thanks to major technical upgrades: high-frequency receivers at the Italian telescopes and a large increase in sensitivity and UV coverage in East Asia. The collaboration and friendship within EATING VLBI are expected to grow, and the project is open to new collaborations.
2023-03-24T15:26:20.232Z
2023-03-22T00:00:00.000
{ "year": 2023, "sha1": "c682266ac33b2e7c97dcc61b1757993a498e2267", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4434/11/2/49/pdf?version=1679467485", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "4fc7c7dde6873ef2a96c020c4634e4650c9d6211", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
255503443
pes2o/s2orc
v3-fos-license
Desirable Non-Technical Skills for Paramedicine: A Delphi Study Introduction Non-technical skills (NTS) are a causative factor in many adverse events in healthcare. Despite this, NTS have been explored in the paramedic literature only in isolation, and there is no current list of desirable paramedic NTS within the literature. This study aims to gather consensus opinions on which NTS are considered important for an operational paramedic. Methods A modified Delphi technique was utilised to achieve the study aim. Participants were required to rate each NTS on a ten-point Likert scale. For an NTS to reach consensus, it was required to be rated within two Likert scale points of the mode score by 80% of participants. Results There were 17 participants in the Delphi study (n=17). The study ran for a total of three rounds, and 33 of 35 NTS reached consensus. The top five NTS were communication, problem-solving, situational awareness, professionalism, and interpersonal skills. Two NTS did not reach consensus: empathy and cognitive offloading. These two did not reach consensus despite being rated six or higher by all participants. Conclusions The results of this Delphi study have created the first expert-based list of important NTS for a paramedic. This will have significant implications for the paramedic field, as we now have a foundation of which NTS are vital for a paramedic to complete their duties. These results can begin to form the foundation of future paramedic behavioural marker systems, which will improve paramedic performance and ultimately lead to improved patient safety.

INTRODUCTION

Non-technical skills (NTS) have been defined as "the cognitive, social, and personal resource skills that complement technical skills, and contribute to safe and efficient task performance". (1) They can be broken into two groups: cognitive and mental, and social and interpersonal. (2) Common NTS include situation awareness, decision making, leadership and teamwork. (3) Non-technical skills can be traced back to 1979; it is from that time that the link between aviation incidents and NTS errors became stronger and an emphasis was placed on the identification and training of NTS in aviation. (4) The application of NTS to healthcare occurred when it was recognised that there were similarities between human errors in the operating room and those in aviation. (2,5) Similarly, medical research over 10 years later still supports these findings, with 44% of adverse medical events in orthopaedic surgery associated with a breakdown of NTS, situational awareness accounting for more than half of these. (6) Furthermore, research from Japan between 2010 and 2013 identified that 46% of fatalities were secondary to an NTS breakdown, as opposed to only 5% that occurred as a result of technical skill errors. (7) This highlights that technical skills alone are insufficient for maintaining patient safety or practitioner performance. (8) Anaesthetists and surgeons, like experts in aviation, have recognised that improvement in NTS is imperative to improve performance and ultimately improve patient safety. As such, these professions have created behavioural marker systems in their respective fields to enhance individual NTS.
(9,10) Behavioural marker systems (which are created from an empirically based taxonomy of NTS for a particular profession) provide an observation-based system for assessing the effectiveness of NTS in a given environment. (11) Each profession must develop its own NTS taxonomy to create behavioural marker systems. This is so that each element, along with good and poor behaviours, is specified, as these can vary significantly from one technical setting to another. (3) Behavioural marker systems are specific to each domain and thus are required to be developed individually for the domain in which they are used. (8) Paramedics have recognised that there is a place for NTS, and further for behavioural marker systems, in their field. (12) However, with the emerging introduction, application and knowledge of NTS in the field of paramedicine, there is currently little evidence to support which NTS are important to the role of a paramedic. (13) The governing bodies of numerous ambulance services worldwide refer to the importance of attributes that would be considered NTS. (14-17) These share similarities to those in anaesthetics and surgery, with an emphasis on communication, teamwork, leadership and situational awareness. (9,10) However, paramedics differ from these other professions as they are required, due to the nature of the job, to operate in uncontrolled and often unpredictable environments. Furthermore, paramedics are in a unique position in which they are required to elicit information while controlling these environments, which often involve multiple people and other emergency services. (18) With such a strong link between NTS breakdowns and adverse events in the medical literature, it can be assumed that similar errors would also result in adverse events in the out-of-hospital environment. It is for this reason that we must begin to identify which NTS are important for an operational paramedic. This is so that paramedicine can begin to make inroads into creating its own behavioural marker systems to improve paramedic performance and, ultimately, patient safety (12). A previously conducted scoping review of NTS in paramedicine identified that there is a significant amount of literature exploring individual NTS in the paramedic field (13). However, the majority of this work is on specific NTS in isolation, with little literature supporting an NTS taxonomy for paramedics. The findings from this scoping review provided a foundation of empirically based NTS for paramedics. This study builds on that list by determining whether the empirically based NTS are important to the role of a paramedic. This study aims to gather consensus opinions on which NTS are considered important for an operational paramedic.

Design

First introduced in 1950, the Delphi technique is an anonymous group information-gathering technique designed to obtain the most reliable consensus opinion of a group of experts. (19) The Delphi study is different from other group data-gathering techniques in that it provides participants with feedback over iterations. The feedback process encourages participants to review and evaluate their initial judgements based on the feedback provided by other Delphi participants. (20) The Delphi technique involves multiple rounds. Round 1 traditionally begins as an idea-gathering round; however, it is an acceptable variation to begin Round 1 with a structured questionnaire created from a previous review of the literature.
(20,21) Data gathered from Round 1 are collated to develop a questionnaire that is distributed to participants in Round 2. Participants are required to answer the questionnaire, and the results are collated and redistributed to participants in Round 3. This encourages participants to re-evaluate their initial responses and change them if they feel it necessary. This process continues for subsequent rounds until consensus is met or a pre-determined maximum number of rounds is reached. (20,21) The Delphi technique differs from other group data-gathering techniques in that individual responses are anonymous. The advantage of this is that individuals can provide an honest opinion of their thoughts without being influenced by dominant individuals, distracted by noise, or conforming to the group opinion. (22) The Conducting and Reporting of Delphi Studies (CREDES) guideline was followed to ensure a systematic approach throughout. (23)

Process

A snowball sampling technique was used to recruit participants for this study. Participants were recruited from Ambulance Victoria, an emergency ambulance service in Australia. A small portion of participants in varied organisational positions were approached by the second author (RB) to participate in the study. These participants then recommended additional participants who met the inclusion criteria. This was done to gather a broad view of which NTS were considered important for a paramedic from different parts of the organisation. The final participant list was distributed among all authors before participants were approached, to ensure they all met the inclusion criteria. Twenty participants were approached to participate in the study, which is within the range consistent with the literature. (20) To be eligible to participate in the study, participants were required to have worked a minimum of 6 years in the organisation and as a paramedic. A modified Delphi technique was utilised for this study, as a structured questionnaire was used for the first round. This was developed from a previously conducted NTS literature review. (13) It was decided that for consensus to be reached, an NTS was required to have 80% of participants rank it within two Likert scale points of the mode score. The mode was utilised as it represents the most selected score from the participants and is a more accurate reflection of where the majority of votes sit, as opposed to the median and mean. (20,24) Where an NTS had two or more modes, the lowest of the modes was utilised. For the study, an NTS was defined as "the cognitive, social, and personal resource skills that complement technical skills and contribute to safe and efficient task performance". (1) The study concluded with whichever of the following occurred first: all NTS reaching 80% consensus within two Likert scale points, or completion of three rounds of the study. Data collection and analysis were undertaken by both authors (RB, BW); neither author had any direct supervision or power relationships with any participants.

Round 1

Participants were asked to rank a list of 26 individual NTS on a 10-point Likert scale from 1 (not important at all) to 10 (highly important). At the end of the round, participants were able to include any additional NTS that they considered important to the role of a paramedic for evaluation by participants in future rounds.

Round 2

The results from Round 1 were collated, and all NTS that reached 80% consensus were removed from the Round 2 questionnaire.
Any additional NTS that participants added after Round 1 were added to the Round 2 questionnaire. Participants were again required to rank the NTS list on a 10-point Likert scale. The mode for each NTS that did not reach consensus was provided to participants to allow them to reconsider their feedback and decide whether they wished to change their answer in the coming round. Participants again had the opportunity to add any additional NTS to the list.

Round 3

As with the previous round, all NTS that reached 80% consensus in Round 2 were removed from the Round 3 questionnaire, and any additional NTS from Round 2 were added. The modes from the previous two rounds were listed next to the NTS that were distributed in Round 3. Participants were again asked to rank the NTS questionnaire on a 10-point Likert scale.

Statistical analysis

Non-technical skills were ranked according to three criteria. The primary ranking system was the round in which each NTS met consensus (i.e., Round 1 ranked highest). The secondary ranking for each NTS was based on its mode rating. Finally, the tertiary ranking system was used when two or more NTS had the same mode rating; in this instance, the percentage of consensus was used to rank the NTS. After the third round, the results from the final round were collated. Any NTS that received a rank of 5 or less was considered not important and was not included in the final list. If an NTS scored 6 or higher, it was deemed to be important and was included in the final list.

Ethical considerations

Independent ethical approval was sought from the Monash University Human Research Ethics Committee (MUHREC) before the commencement of this study (Project ID number 19599). As participants were employed by Ambulance Victoria, governance approval was granted by the organisation.

RESULTS

Of the participants approached to participate in the study (n=20), 17 agreed (n=17). The most represented age of participants was 44 to 49 years (n=6), and the majority of participants had over 15 years' experience as a paramedic (n=13). All participants have daily direct interaction with operational staff, and all have worked as operational paramedics. Figure 1 outlines the positions held by participants within the organisation.

Round 1

Seventeen participants participated in Round 1. Eight of the 26 NTS in the pre-determined questionnaire reached consensus in Round 1 (Table 1). Communication was the most highly rated, with a mode of 10. The lowest-rated NTS in Round 1 was mentoring, with a mode of 6. Both empathy and compassion achieved the least consensus, with 47.06% agreement with their respective modes. Integrity and bias recognition were both rated the highest, with a mode of 10 (Table 3). Two of the listed NTS did not reach 80% consensus over the three rounds: empathy and cognitive offloading. Neither NTS received a ranking of 5 or below by any participant across the three rounds; however, consensus could not be reached on a ranking. Additionally, the data indicated there was no relationship between organisational positions and an individual's ranking of these two NTS. This would indicate that these NTS were ranked from personal experience and that the lack of consensus was not due to specific positional demands. This study identified that a total of 35 NTS were considered important for a paramedic, of which 33 reached consensus. This has provided a ranked expert-based list of important NTS for a paramedic (Table 4).
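To make the consensus rule and the three ranking criteria concrete, the following minimal Python sketch applies them to hypothetical ratings; the actual study data are not reproduced here.

```python
from collections import Counter

def consensus(ratings, window=2, threshold=0.80):
    """Study rule: consensus if >= 80% of ratings fall within two
    Likert points of the mode; the lowest mode is used on ties."""
    counts = Counter(ratings)
    top = max(counts.values())
    mode = min(score for score, n in counts.items() if n == top)
    share = sum(abs(r - mode) <= window for r in ratings) / len(ratings)
    return mode, share, share >= threshold

# Hypothetical ratings from 17 participants for one NTS:
print(consensus([10, 9, 10, 8, 10, 9, 10, 10, 7, 9, 10, 8, 10, 9, 10, 10, 9]))
# -> (10, ~0.94, True): 16 of 17 ratings lie within [8, 10]

# Ranking criteria: round reached (earlier first), then higher mode,
# then higher percentage agreement (hypothetical tuples shown).
nts = [("mentoring", 1, 6, 0.88), ("communication", 1, 10, 0.94)]
ranked = sorted(nts, key=lambda t: (t[1], -t[2], -t[3]))
print([name for name, *_ in ranked])  # ['communication', 'mentoring']
```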
DISCUSSION

The results from this study have provided an expert-based list of important NTS for an operational paramedic. This builds on the previous empirically based list of NTS and can provide a foundation for NTS work in the paramedic field. (13) The results increased the list of NTS that are considered important to the role of a paramedic. As this list reflects the opinion of experienced paramedics, it would be prudent to ensure the additional NTS are included in discussions of NTS in paramedicine. The authors acknowledge that further work is required to reduce this list; however, empirical evidence on a paramedic NTS taxonomy is limited, with opinions based on anecdotal experience. Thus, this first step is required before refinement of the list can occur. With further development through grouping, statistical analysis (e.g., factor analysis or cluster analysis) and larger sample sizes, this list of NTS has the potential to be utilised in the creation of behavioural marker systems. It was no surprise that 'communication' was ranked as the most important NTS for a paramedic. Paramedics use communication in all facets of their day-to-day work, including communicating with patients, colleagues and other health and emergency service professionals. Furthermore, communication plays an integral part in acquiring and disseminating information, which is pivotal to diagnosis and treatment. (25) Communication has been found to benefit other NTS in the setting of paramedicine, with communication contributing to improved decision-making, as well as improving rapport and interactions in sensitive situations or with different patient demographics. (26-29) Thus, communication skills are recognised in multiple governing body position descriptions and in the paramedic literature as an essential and desirable NTS. (14-17,30) There were similarities between other medical field taxonomies and the results of this study. Non-technical skills such as communication, decision making, situational awareness and teamwork, which are consistent across the anaesthetist and surgeon NTS taxonomies, were also considered relevant to the field of paramedicine. (10,31) These similarities can be explained by the transferability of generic NTS, which all play a significant part in improving performance. (4,32) However, this could also be attributed to the similarities between these professions, which operate in dynamic and time-sensitive circumstances. Additional NTS that were not included in the anaesthetist and surgeon taxonomies were also identified as necessary to the paramedic field. Given the intended broad scope of this Delphi study, this was not unexpected; however, for NTS such as problem-solving, professionalism, interpersonal skills and scene management to reach consensus in the first round indicates that they play a significant part in a paramedic's ability to execute their job requirements. Some of these skills would not be as important to an anaesthetist or surgeon as they are for a paramedic, given the environments in which paramedics operate. The identification of these different NTS highlights the requirement for paramedics to work to create their own behavioural marker systems to improve the NTS that are specific to the profession. (12) Empathy and cognitive offloading were unable to reach consensus on their ranking; this was despite both NTS being ranked 6 or higher in each round by all participants.
Healthcare research supports the importance of empathetic behaviour, which has been linked to a decrease in medical errors along with increased patient compliance and improved diagnosis. (33-35) Additionally, cognitive overload has been attributed to a breakdown of system one (intuitive) and system two (rational) thinking styles, causing individuals to take longer to make analytical decisions. (36,37) Consequently, individuals become more reliant on intuitive thinking; thus, the individual is prone to biased reasoning and at risk of making poor decisions. (37) Cognitive offloading can contribute to decreased cognitive load and reduce the effects of cognitive overload, improving decision-making. Both empathy and cognitive offloading are recognised as critical NTS in broader healthcare. This study supports that these NTS are essential for a paramedic to complete their duties; however, it appears the extent of that importance varied among the experts, and further research is required to determine where they sit relative to the other listed NTS.

Limitations
The authors acknowledge that this study is not without limitations. The study population was small, and a larger sample size would be required to determine the validity and reliability of the results. Furthermore, the terms utilised in the study may be interpreted differently by individuals. By utilising a pre-determined questionnaire, the investigators could introduce bias into the study. One of the benefits of the Delphi technique is that participants have the opportunity to provide feedback after each round; this allowed participants to include any additional NTS and thus would alleviate any potential investigator bias that may have been introduced. Lastly, removing NTS that reached consensus in the previous round prevented a more in-depth statistical analysis and assessment of inter-rater correlation through the study.

CONCLUSION
The results of this Delphi study have created an expert-based list of important NTS for a paramedic. This will have significant implications for paramedicine, as there is now a foundation of which NTS are important for a paramedic to complete their duties. Further research is required to determine the reliability and validity of the results; once completed, the list can be utilised to form future paramedic behavioural marker systems that could be implemented to improve paramedic performance of NTS, ultimately leading to improved patient safety.
2023-01-08T14:06:41.521Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "60738e2ab40c661dd007f8bb5edb4e5b89078444", "oa_license": null, "oa_url": "https://ajp.paramedics.org/index.php/ajp/article/download/855/1132", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "60738e2ab40c661dd007f8bb5edb4e5b89078444", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
118348227
pes2o/s2orc
v3-fos-license
Spectroscopy of formaldehyde in the 30140-30790 cm⁻¹ range
Room-temperature absorption spectroscopy of formaldehyde has been performed in the 30140-30790 cm⁻¹ range. Using tunable ultraviolet continuous-wave laser light, individual rotational lines are well resolved in the Doppler-broadened spectrum. Making use of genetic algorithms, the main features of the spectrum are reproduced. Spectral data are made available as Supporting Information.

Introduction
The ultraviolet (UV) spectrum of formaldehyde (H₂CO) has been studied extensively since the early days of molecular spectroscopy [1]. As one of the simplest polyatomic molecules, it can be considered a model system for molecular physics. A variety of studies has been carried out to experimentally determine key parameters for formaldehyde photochemistry, such as photodissociation quantum yields and absolute absorption cross sections. These numbers are relevant for atmospheric science, for which formaldehyde is an important molecule. Formaldehyde is present in the atmosphere at concentrations of ∼50 pptv (parts per trillion by volume) in clean tropospheric air [2] and up to 10-70 ppbv (parts per billion by volume) in the air of urban centers [3,4]. By excitation of the Ã¹A₂ ← X̃¹A₁ transition in the 260-360 nm wavelength range, two dissociation channels [5,6] with high quantum yields [7,8,9,10,11,12] are open: (a) H₂CO + hν → H₂ + CO and (b) H₂CO + hν → H + HCO. Reaction channel (a) opens at wavelengths <360 nm; reaction channel (b) opens at wavelengths <330 nm (see the worked conversion at the end of this Introduction).

Formaldehyde is also an interesting molecule for the field of cold polar molecules [13,14]. Chemical reactions at low collision energies become controllable by externally applied electric fields [15,16,17]. An example is the hydrogen abstraction channel in the reaction of formaldehyde with hydroxyl radicals [18]. Due to its large dipole moment and large linear Stark shift [19], cold formaldehyde molecules can be prepared by electrostatic filtering from a thermal gas, where fluxes of up to 10¹⁰ s⁻¹ with a mean velocity of 50 m/s, corresponding to a translational temperature of ≈5 K, have been demonstrated [20,21]. The rich ultraviolet spectrum in the 260-360 nm range [28,25,26,27,22,23,24] gives the opportunity to spectroscopically study these velocity-filtered cold guided molecules with high resolution, since this wavelength region is accessible with narrow-bandwidth frequency-doubled continuous-wave (cw) dye lasers. To address individual rotational transitions of these velocity-filtered molecules, it is necessary to predict line positions for states populated in the guided beam with high accuracy, requiring refined rotational constants. The work described in this paper is a necessary prologue to experimentally accessing the internal state distribution of the slow guided molecules [29].

In this paper we give a detailed description of the experimental setup used for measurements of (weak) absorption spectra in the near-UV spectral region. We compare our measurements to previous data and show the improvement in resolution, which allows identification and precise determination of line positions for individual rotational transitions. The lineshape of isolated lines is well reproduced by a room-temperature Doppler profile. We find deviations from line positions calculated with literature values for rotational constants [27,24].
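The worked conversion of the photolysis thresholds referenced above, using the fact that the wavenumber is the reciprocal of the wavelength:

ν̃(360 nm) = 1/(360 × 10⁻⁷ cm) ≈ 27 800 cm⁻¹,  ν̃(330 nm) = 1/(330 × 10⁻⁷ cm) ≈ 30 300 cm⁻¹.

The 30140-30790 cm⁻¹ range covered in this work therefore lies well above the threshold of the molecular channel (a) and brackets the ≈30300 cm⁻¹ threshold of the radical channel (b).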
Using genetic algorithms [31,32,30] to fit rotational constants of the 2¹₀4³₀ and 2²₀4¹₀ rovibrational bands, good agreement between the simulation and the measured spectra is achieved over a wide range of the spectrum. However, the region between 30390-30410 cm⁻¹ had to be excluded from the simulation, which might indicate the presence of perturbations. We briefly review how the setup can be modified for Doppler-free measurements on selected lines of this weak transition [33]. Doppler-free linewidths of 40-50 MHz give a lower bound for the lifetime of the energy levels, which are known to predissociate.

Experimental Setup
The experimental setup used for our absorption spectroscopy measurements is standard [34]. However, due to the small absorption cross sections of formaldehyde (σ ∼ 10⁻¹⁹ cm²), measurements must be performed at relatively high densities and long optical path lengths. As shown in Fig. 1, a home-made multipass setup using two spherical UV mirrors placed outside the vacuum chamber and one additional retro-reflection mirror was used.

Laser System
Tunable narrow-bandwidth ultraviolet laser light is produced by second-harmonic generation of the output from a continuous-wave ring dye laser (Coherent 899, pumped by a Coherent VERDI) in an external enhancement cavity (Coherent MBD-200). The dye laser is operated with the dye "DCM special" using a stainless-steel nozzle optimized for high pressures (9 bar). The frequency of the dye laser is locked to an external temperature-stabilized reference cavity, resulting in a linewidth of ≈0.5 MHz. A part of the fundamental laser light is split off and sent via optical fibres to a wavemeter (HighFinesse WS7) and to an Invar Fabry-Perot interferometer (FPI) with a free spectral range of 1.0024 GHz. The wavemeter used in our experiments has a specified absolute frequency uncertainty of 100 MHz when properly calibrated to a light source of well-known frequency, in our case a stabilized HeNe laser (SIOS SL03). We therefore carefully estimate the absolute frequency accuracy of our measurements to be better than 300 MHz = 0.01 cm⁻¹ in the fundamental and hence 0.02 cm⁻¹ in the UV. At wavelengths around 330 nm, typical UV output powers are ≈20-30 mW at fundamental powers of 300-400 mW. Measurements were performed with UV powers around 20 mW. The generated UV light is sent through two cylindrical lenses to compensate the astigmatism that is intrinsic to the type II critical phase matching in the BBO crystals used for second-harmonic generation at this wavelength. Furthermore, the residual transmission of UV light through two 45° steering mirrors is monitored on two CCD cameras, which allows compensation of beam pointing effects when changing the laser wavelength. Without realignment, changes of the fundamental wavelength by 0.5 cm⁻¹ lead to a significant difference in signal on the absorption and reference photodiodes due to the large optical path length. By manually realigning the laser beam onto the two CCD cameras after each wavelength change, beam pointing effects were no longer observed.

Fig. 1. The experimental setup used for room-temperature absorption spectroscopy, including the laser system and vacuum chamber. A part of the fundamental laser light from the dye laser is sent through optical fibres to a wavemeter and a Fabry-Perot interferometer (FPI) for wavelength measurement and calibration of the scan speed. UV laser light is generated using a BBO crystal placed in an external enhancement cavity.
After passing the beam pointing correction system, consisting of two CCD cameras and two steering mirrors, and the astigmatism compensation setup, the UV light is sent through the formaldehyde spectroscopy chamber using a multipass setup with an overall path length of 3.15 m. Reference and absorption signals are measured using the reflection from a fused-silica wedge placed near normal incidence. For frequency-modulated Doppler-free measurements, an electro-optical modulator (EOM, indicated by the dashed box) is inserted into the retroreflected beam for frequency modulation.

Formaldehyde Spectroscopy Setup
Absorption spectroscopy is performed in a vacuum chamber of ∼22.5 cm length. Connected to the vacuum chamber are a turbomolecular pump, a pressure gauge (Pfeiffer Vacuum Compact Full Range Gauge PKR261), a flow valve for formaldehyde input, and a flow valve allowing analysis of the chamber's contents by a mass spectrometer (Pfeiffer Vacuum Prisma QMS200). The effective pumping speed of the turbomolecular pump can be reduced by an angle valve placed between the recipient and the pump to avoid excess pumping of formaldehyde. With the turbomolecular pump, the spectroscopy chamber could be evacuated to a base pressure in the 10⁻⁶ mbar range. Formaldehyde is produced by heating paraformaldehyde (Sigma-Aldrich) to a temperature of 80-90 °C. To clean the dissociation products and remove unwanted water and polymer residues, the gas is led through a dry-ice cold trap at a temperature of ≈ -80 °C [35]. Without the cold trap, the viewports of the vacuum chamber became coated with a white layer of paraformaldehyde after several hours of operation. With the cold trap this effect was no longer observable, even after extensive use. Since formaldehyde molecules dissociate upon UV excitation, a stable flow of formaldehyde was maintained by slightly opening the valve between the turbo pump and the vacuum chamber. The partial pressures of formaldehyde and its dissociation products were monitored with the mass spectrometer, and the formaldehyde input flow rate, as well as the flow to the turbomolecular pump, was optimized for a constant ratio. In this way measurements could be performed at a constant formaldehyde concentration. Measurements were performed at a constant pressure of 50 Pa in the vacuum chamber with a formaldehyde fraction estimated to be ≥50%.

To achieve a large optical path length in a relatively compact setup, a multipass and retro-reflection configuration was used. The UV laser beam is initially focussed into the vacuum chamber with the beam parameters mode-matched to the effective cavity mode generated by the two multipass mirrors. Two curved mirrors with a radius of curvature (RoC) of 200 mm outside the vacuum chamber are used to refocus the beam into the vacuum chamber, allowing 7 passes. Using an additional mirror (RoC = 500 mm) for retro-reflection, the effective path length can even be doubled, yielding ∼3.15 m in a compact vacuum chamber of ∼22.5 cm length. All curved mirrors are positioned such that their radius of curvature matches the Gaussian mode leaving the spectroscopy setup. For detection, a fused-silica wedge was placed near normal incidence, which allows picking up part of the original beam, used as a power reference, and part of the retro-reflected beam, containing the absorption signal. The picked-up beams are focussed on UV-sensitive Si photodiodes (Thorlabs PDA36EC).
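As a rough consistency check, approximating each pass by the ∼22.5 cm chamber length, the seven passes together with the retro-reflection doubling give an optical path of about 2 × 7 × 0.225 m ≈ 3.15 m, matching the overall path length quoted above.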
A linear voltage ramp was applied to the external scan input of the dye laser to sweep the laser frequency over 20 GHz in the fundamental, resulting in a scan speed of ≈45 GHz/s. For each of these sweeps, the central frequency of the fundamental was measured with the wavemeter before and after the sweep, giving agreement within 0.001 cm⁻¹ in the fundamental. The FPI transmission was monitored for calibration of the scan speed. Subsequent scans were performed with a central frequency difference of 10 GHz in the fundamental, giving ≈50% overlap for concatenating individual sweeps.

Data Acquisition and Data Analysis
For data acquisition, a 4-channel digital oscilloscope was used. The external ramp, the FPI transmission peaks, and the reference and absorption photodiode signals were recorded simultaneously. For later data analysis, the channels were rebinned to a resolution of 100 MHz in the UV, which was chosen such that individual rotational transitions, which are Doppler-broadened to 2.4 GHz, could be well resolved. Overlapping adjacent scans using the central frequencies measured with the wavemeter showed good agreement between lines present in both scans and confirmed the central wavelength measurements with the wavemeter.

Modifications for Doppler-free measurements
The experimental setup used for Doppler-free measurements and the analytical model developed for the amplitude of Doppler-free peaks are described in detail elsewhere [33]. Here, only the modifications to the spectroscopy setup are summarized. To ease the detection of weak Doppler-free signals, frequency-modulation (FM) spectroscopy [36,37] is performed. For this, an electro-optical modulator (EOM, Leysop EM400K), resonantly driven at a frequency of 15.8 MHz for the creation of sidebands, is placed in the retroreflected beam (see Fig. 1). Since demodulation of the signal at 15.8 MHz is necessary, fast photodiodes with a sufficiently high bandwidth are used for detection (Thorlabs PDA155). Furthermore, since the Ã¹A₂ ← X̃¹A₁ electronic transitions in formaldehyde are weak, high laser powers are needed to reach significant saturation and discriminate the Lamb dips against the Doppler-broadened background. After careful tuning of the laser system, UV laser powers of 250-350 mW were available for this saturation-spectroscopy experiment.

Results and Discussion
Previous measurements of the Ã¹A₂ ← X̃¹A₁ rovibrational bands of formaldehyde aimed at the determination of absolute temperature-dependent absorption cross sections [22,23,24]. Other experiments studied the quantum yield of the dissociation processes following UV excitation [7,8,9,10,11,12]. These cross sections and quantum yields are important parameters for the description of sunlight-induced photochemistry in the atmosphere, as discussed in the introduction. These measurements were performed using either broadband light sources and spectrometers ([22] and refs. therein) with resolutions above 1 cm⁻¹, or pulsed lasers with spectral resolutions of, e.g., 0.35 cm⁻¹ [23,24]. Exceptions are the measurements by Co et al. [38] with a resolution of 0.027 cm⁻¹, spanning the long-wavelength range (351-356 nm) of the Ã¹A₂ ← X̃¹A₁ band, and the measurements of Schulz et al. of the 2¹₀4¹₀ and 2²₀4¹₀ rovibrational bands [39]. Except for these two experiments, the resolution of previous measurements is not high enough to resolve individual rotational lines with a Doppler width of 2.4 GHz at room temperature.
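For reference, the standard Doppler width relation for a line at frequency $\nu_0$, molecular mass $m$, and temperature $T$ is

$$\Delta\nu_{\mathrm{FWHM}} = \frac{\nu_0}{c}\sqrt{\frac{8\ln 2\,k_B T}{m}},$$

which for H₂CO (m ≈ 30 u) at T = 293 K and ν₀ ≈ 9.1 × 10¹⁴ Hz (≈30400 cm⁻¹) evaluates to a width on the 2 GHz scale, consistent with the value quoted above.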
Comparison to previous studies
The improvement in resolution compared to previous studies [24] with a resolution of 0.35 cm⁻¹ is shown in Fig. 2, where the region around the band heads of the rR(Kₐ″ = 3) progression at ≈30389 cm⁻¹ and the rR(Kₐ″ = 4) progression at ≈30397 cm⁻¹, with many lines close together, is shown. Individual rotational lines are well resolved, and accurate line positions as well as intensities can be determined. For the figures shown (and also for the Supplementary Information), our measured data were binned to a resolution of 100 MHz = 0.003 cm⁻¹, which is sufficient to resolve the Doppler-broadened lines. Smith et al. [24] used a lineshape function with a full width at half maximum (FWHM) of 0.45 cm⁻¹ to reproduce their measured spectra in simulations. This was surprising, since the UV linewidth of their laser sources was expected to be ≤0.20 cm⁻¹, as calculated from the measured linewidths of the fundamental. It was speculated that this could be explained either by extremely short excited-state lifetimes, not in agreement with literature values [5], or by an additional technical broadening. In our measurements we find for isolated lines across the whole studied range a width of 2.4 GHz FWHM, which is in good agreement with a Doppler-broadened line profile at room temperature (293 K). This therefore rules out extremely short lifetimes and confirms their assumption of a larger laser linewidth.

For the fit of the molecular parameters to the experimental spectrum, Watson's A-reduced Hamiltonian [40,41], including quartic and sextic centrifugal distortion terms, has been used with a genetic algorithm (GA) as optimizer. Details about the GA and the cost function used for evaluating the quality of the fit can be found in Refs. [31,32,30]. Table 1 compiles the parameters determined in this way and compares them to the previous parameters for the ground state [27] and the excited vibronic states [24]. The ground-state parameters are in excellent agreement with the values that have been deduced from microwave frequencies and combination differences in infrared and electronic spectra, weighted by appropriate factors [27]. Deviations of the line positions between 30390 and 30404 cm⁻¹ were observed. The reason why exclusion of a part of the data was necessary might be an interference with the weak vibronic 2¹₀4¹₀6¹₀ band with its origin at 30395 cm⁻¹. Although we included this band, with the molecular parameters from [42] as starting values, in our fit, no improvement of the cost function could be obtained by this combined fit. Due to the high temperature, high-J states are populated, and sextic centrifugal distortion terms have proven necessary in the fit. The appropriate nuclear spin statistics (statistical weight 1 for Kₐ-even levels; 3 for Kₐ-odd levels) have been taken into account. For comparison, parts of the simulated and the measured absorption spectrum around the band origin are shown in Fig. 3. The simulated and measured spectra, as well as a complete line list of the calculated transitions, are given in the supplementary material.

Conclusion and Outlook
Doppler-limited measurements covering the 2¹₀4³₀ and 2²₀4¹₀ rovibrational bands of formaldehyde between 30140 cm⁻¹ and 30800 cm⁻¹ were performed. The enhanced resolution compared to previous studies enables precise determination of line positions and line strengths. In the Doppler-broadened spectra we find no indications for additional line broadening effects.
Doppler-free measurements performed with small modifications to the experimental setup confirm previous studies of excited-state lifetimes. The comparison of measured line positions to simulations of the spectrum using literature values for the rotational constants shows significant deviations. Using genetic algorithms for the simulation of the rotational spectrum, better agreement with the measured data is found over a wide range of the spectrum. However, some regions had to be excluded from the fit, indicating perturbations of the rotational structure. Using this detailed understanding of the rotational structure of the formaldehyde ultraviolet spectrum, we have performed internal state diagnostics of guided cold formaldehyde beams [29].
2019-04-12T17:26:10.095Z
2008-04-01T00:00:00.000
{ "year": 2008, "sha1": "58744a93ed57a5a45d4e072ed2d445319f1c0c54", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0804.0207", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8d61cd3dbc9491a68bf600ab2c88f27c19ab2ed5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
54705461
pes2o/s2orc
v3-fos-license
Evolutionary Game Analysis of the Supervision Behavior for Public-Private Partnership Projects with Public Participation

The public can directly or indirectly participate in PPP (public-private partnership) projects and thus has an impact on project profit and on public or private behavior. To explore the influence of public participation on the supervision behavior of PPP projects, this paper analyzes the mutual evolutionary dynamics of the private sector and the government supervision department and the influence of the public participation level on public and private behavior, based on evolutionary game theory. The results show that the supervision strategy is not chosen when the supervision cost of the government supervision department is greater than the supervision benefit, and that improving the public participation level can make the private sector consciously provide high-quality public products/services. Therefore, the government should reduce the cost of public participation and improve the public participation level and influence through the application of the Internet, big data, and other advanced technologies, in order to restrain the behavior of the private sector and improve supervision efficiency.

Introduction
In recent years, the public-private partnership (PPP) has been widely used in power supply [1], water supply [2], sewage and garbage disposal [3,4], traffic [5,6], pipelines [7], and other public infrastructure construction fields and has become a main way for governments to provide public services [8,9]. A public-private partnership is a cooperative relationship formed between government and private organizations in order to build infrastructure projects or provide public goods and services [10]. The PPP allocates the rights and obligations of both sides through a contractual constraint mechanism, in order to ensure the smooth completion of the cooperation and, ultimately, to let all parties achieve more advantageous results than they could expect acting alone. Why is the PPP mode so popular? Scholars argue that, when used correctly, the PPP mode not only improves infrastructure supply efficiency, saves total cost, and shares risk, but can also alleviate the problem of insufficient funds in the public sector [8,11]. These benefits, however, rest on the premise that the private sector provides high-quality public products/services. If the private sector instead provides low-quality public products/services to obtain higher profits by reducing maintenance costs and ignoring environmental protection, this seriously damages the public interest and causes adverse social effects [12]. Therefore, effective supervision by the government is indispensable. Nevertheless, with the rapid development of the PPP mode, the number of PPP projects has increased greatly, making supervision more challenging than ever, and supervision costs continue to rise. For example, in Britain the public sector supervises PPP/PFI projects mainly by signing contracts with third-party organizations. The data showed that 17 major government departments spent £1 billion-£1.3 billion on consultants and temporary workers from 2014 to 2015, imposing a heavy burden on government departments [13]. Therefore, how to reduce supervision costs and improve supervision efficiency is an important issue that government departments have to face. The ultimate goal of PPP projects is to provide high-quality, low-cost public services for the public, and public opposition is often one
of the important factors in PPP project failure, so public participation plays an important role in the smooth operation of PPP projects [14-17]. More and more attention is being paid to public participation. Governments ask for public opinions through various channels, letting the public actively participate in the project operation process and effectively influence and supervise the behavior of the private sector, thereby improving the level of public services [18,19]. Does public participation, then, have an effect on the behaviors of the public and private parties in the process of PPP project supervision? And how can a reasonable supervision strategy be formulated with public participation, improving the supervision efficiency of government departments?

The aim of this paper is, therefore, to analyze the mutual evolutionary dynamics of the private sector and the government supervision department and the influence of the public participation level on public and private behavior through evolutionary game theory, to explore how to formulate a reasonable supervision strategy, and to improve supervision efficiency with public participation.

This paper is structured as follows. Section 2 reviews the theory regarding PPP and supervision. Section 3 builds the evolutionary game model with public participation and analyzes the stability of the evolutionary strategies. Section 4 verifies the effectiveness of the model results through numerical simulation analysis. Section 5 presents the conclusions of the paper.

Literature Review
Generally, a PPP contract is signed between the government and the private sector and usually covers planning and design, building, operation, and transfer of assets by the private sector [20]. Typically, all risks in the project are shared and allocated between the two sectors [21]. Therefore, the government can transfer some risks to the private partner that is best able to manage them [22]. For example, demand risk, technical risk, and financial risk are taken by the private sector, while political risk and legal risk are undertaken by the government [23]. In this way, there is an incentive for the private partner to provide services innovatively, and the government can also manage the project professionally [24,25]. However, because of the complexity of the project and conflicts of interest, the private sector may not comply with the contract all along. El-Gohary et al. noted that concerns arise when the asset is owned by a private sector that holds a profit-making mindset [26]. For instance, the private sector may minimize operating costs to improve returns and lower demand risk. Al-Saadi and Abdou showed that a proper legal and regulatory framework and risk allocation and sharing are critical success factors for PPPs [21]. Consequently, appropriate regulation or supervision is needed [27]. Supervision gives early warning of any possible risks that may threaten the project and triggers actions to deal with them [8].

Supervision of PPP projects mainly includes ensuring that the project plan achieves value for money (VFM), admittance supervision through franchisee qualification screening, and performance supervision to avoid low public service levels, low operational efficiency, market failure, and other problems [28]. Current research on the supervision problem of PPP projects focuses on the importance of project supervision and on supervision mechanism design. Decorla-Souza et al.
[29] believed that the government supervision level, legal system, and policies have an important effect on the successful implementation of PPP projects. Empirical studies by Yun et al. [30] and Panayides et al. [31] showed that institutional factors such as regulatory quality, market competitiveness, and contract enforcement have a significant impact on the success of PPPs. Sabry [32] found that regulatory quality and an effective bureaucracy have a positive effect on the performance of PPPs. Koo et al. [33] argued that the participation of the private sector causes principal-agent problems, which affect service efficiency, while effective supervision by the government is a powerful means to reduce opportunistic behavior by the private sector. However, this literature mainly focused on the relationship between the government supervision level and project operational performance and demonstrated the importance of project supervision; there has been little research on supervision mechanisms and strategies. Therefore, some scholars have carried out further research on how to design a reasonable supervision mechanism. Efremov [34] and Manacorda et al. [35] studied the construction of a supervisory framework for PPP projects from the standpoint of laws and regulations, but they concentrated on qualitative research. In terms of quantitative research, He and Fu [36] built an incentive model for the private sector premised on the quality of public products/services and realized supervision of the service quality and efficiency of the private sector so as to maximize the level of social welfare. Greco [37] built a motivation and supervision model based on principal-agent theory and studied how the government should choose a reasonable motivation and supervision level. However, the above literature ignored the relationship between public participation and PPP project supervision. In fact, public participation in the decision-making and operation of PPP projects is a main way to genuinely reflect social needs and the public will, and it is crucial for the smooth implementation of the project [38]. For example, for a project paid for by the public, the public can affect the benefits of the project by buying other services in the presence of competition (e.g., for a subway project, the public can choose the bus or drive instead of taking the subway) [39]. In addition, the public can influence the operational efficiency of the project by participating in PPP project decision-making [16], and these choices have an impact on the behavior of the private sector. Therefore, it is necessary to carry out further research on the supervision problem of PPP projects with public participation, to analyze how to formulate a reasonable supervision strategy with public participation.

On the other hand, considering that the private sector and the government supervision department are boundedly rational players in the actual implementation of PPP projects, because of information asymmetry, the dynamic change of the environment, and the limits of human cognition, PPP project supervision is a game process of continuous learning and dynamic evolution.
Therefore, building on the above research and based on evolutionary game theory, this paper studies the evolutionary trajectory of the supervision behavior of PPP projects with public participation and analyzes the influence of the public participation level on the behavior of the private sector and the government supervision department, thereby providing theoretical references for the establishment of supervision strategy.

Evolutionary Game Model with Public Participation.
Assume that the private sector has two strategy choices: with probability x it provides high-quality public products/services, and with probability 1 − x it provides low-quality public products/services. The cost of providing high-quality public products/services is C_h, and the cost of providing low-quality public products/services is C_l; R_0 denotes the fixed income that can be realized in either case (such as the lowest income promised by the government); θ (0 ≤ θ ≤ 1) denotes the participation level of the public; R_1 and C_p denote, respectively, the additional income and the additional cost arising when the private sector provides high-quality public products/services that meet the real needs of the public as far as possible, with the additional income realized in proportion to θ; and R_2 denotes the additional loss suffered by the private sector when it provides low-quality public products/services under public participation, likewise realized in proportion to θ. The government supervision department chooses with probability y to supervise the private sector and with probability 1 − y not to supervise. When choosing the supervision strategy, the government supervision department obtains the income R_s (including incentive subsidies from superior departments and public recognition of the supervision department) and pays the cost C_s. When the private sector provides low-quality public products/services and this is discovered by the government supervision department, the corresponding fine is F. When the private sector provides low-quality public products/services while the government supervision department chooses not to supervise, it is assumed that the higher the public participation level, the higher the probability that the misconduct is discovered; in that case the private sector incurs the fine F with probability θ, and the government supervision department incurs the punishment P, likewise with probability θ.

According to the above assumptions, when the private sector provides high-quality public products/services and the government supervision department chooses the supervision strategy, the income of the private sector is R_0 − C_h + θR_1 − C_p and the income of the government supervision department is R_s − C_s. When the private sector provides high-quality public products/services and the government supervision department chooses the nonsupervision strategy, the income of the private sector is R_0 − C_h + θR_1 − C_p and the income of the government supervision department is 0. When the private sector provides low-quality public products/services and the government supervision department chooses the supervision strategy, the income of the private sector is R_0 − C_l − θR_2 − F and the income of the government supervision department is R_s − C_s. When the private sector provides low-quality public products/services and the government supervision department chooses the nonsupervision strategy, the income of the private sector is R_0 − C_l − θ(R_2 + F) and the income of the government supervision department is −θP. The resulting payoff matrix of the private sector and the government supervision department is shown in Table 1.
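Written out as a bimatrix (each entry lists the private sector's payoff first and the government supervision department's payoff second), the payoffs just described are as follows. This is a reconstruction from the verbal description above, using the symbols defined in this section; the original Table 1 may use different labels.

```
                       Supervise (y)                          Not supervise (1 − y)
High quality (x)       (R_0 − C_h + θR_1 − C_p, R_s − C_s)    (R_0 − C_h + θR_1 − C_p, 0)
Low quality (1 − x)    (R_0 − C_l − θR_2 − F,   R_s − C_s)    (R_0 − C_l − θ(R_2 + F), −θP)
```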
According to the game matrix (Table 1), the expected revenue of the private sector when choosing to provide high-quality public products/services is

U_H = y(R_0 − C_h + θR_1 − C_p) + (1 − y)(R_0 − C_h + θR_1 − C_p) = R_0 − C_h + θR_1 − C_p.   (1)

The expected revenue of the private sector when choosing to provide low-quality public products/services is

U_L = y(R_0 − C_l − θR_2 − F) + (1 − y)[R_0 − C_l − θ(R_2 + F)] = R_0 − C_l − θ(R_2 + F) − y(1 − θ)F.   (2)

The average expected revenue of the private sector is

Ū = xU_H + (1 − x)U_L.   (3)

Thus, the replicator dynamic equation of the private sector is

dx/dt = x(U_H − Ū) = x(1 − x)(U_H − U_L) = x(1 − x)[θ(R_1 + R_2 + F) − (C_h + C_p − C_l) + y(1 − θ)F].   (4)

Similarly, the replicator dynamic equation of the government supervision department is

dy/dt = y(1 − y)[R_s − C_s + (1 − x)θP].   (5)

Therefore, the strategy evolution of the private sector and the government supervision department can be described by the system of differential equations formed by (4) and (5). Analyzing the stationary points of this system yields five equilibrium points, E_1(0, 0), E_2(0, 1), E_3(1, 0), E_4(1, 1), and E_5(x*, y*), where

x* = 1 − (C_s − R_s)/(θP),  y* = [(C_h + C_p − C_l) − θ(R_1 + R_2 + F)]/[(1 − θ)F].

Based on [40], the Jacobian matrix of the system formed by (4) and (5) is

J = | (1 − 2x)[θ(R_1 + R_2 + F) − (C_h + C_p − C_l) + y(1 − θ)F]    x(1 − x)(1 − θ)F                 |
    | −y(1 − y)θP                                                   (1 − 2y)[R_s − C_s + (1 − x)θP]  |

The determinant of the Jacobian matrix is det J = J_11·J_22 − J_12·J_21, and its trace is tr J = J_11 + J_22.

Stability Analysis of the Evolution Strategy with Public Participation

Case 1. When C_s > R_s and 0 < θ < min((C_h + C_p − C_l)/(R_1 + R_2 + F), (C_s − R_s)/P), the supervision cost of the government supervision department is greater than the supervision benefit, and, because of the low participation level of the public, the expected punishment when choosing the nonsupervision strategy is less than the net loss incurred by supervising. Meanwhile, the loss suffered by the private sector when providing low-quality public products/services is less than the increased cost of providing high-quality public products/services. In this case the system eventually evolves to the state in which the private sector provides low-quality public products/services and the government supervision department chooses the nonsupervision strategy; E_1(0, 0) is the only stable point of the system, as shown in Table 2.

Case 2. When C_s > R_s and (C_s − R_s)/P < θ < (C_h + C_p − C_l)/(R_1 + R_2 + F), the supervision cost of the government supervision department is still greater than the supervision benefit, and the expected punishment when choosing the nonsupervision strategy is greater than the net loss incurred by supervising, but the benefit obtained when the private sector provides high-quality public products/services is less than the increased cost. The strategy choices of both parties are then uncertain, and the system has no stable point, as shown in Table 3.

Case 3. When C_s > R_s and (C_h + C_p − C_l)/(R_1 + R_2 + F) < θ < 1, the public participation level has reached a level at which it has a significant impact on the income of the private sector, making the benefit obtained when providing high-quality public products/services exceed the increased cost. However, the supervision cost of the government supervision department is still greater than the supervision benefit, and the expected punishment when choosing the nonsupervision strategy is less than the net loss incurred by supervising. Therefore, the system eventually evolves to the state in which the private sector provides high-quality public products/services while the government supervision department still chooses the nonsupervision strategy; E_3(1, 0) is the only stable point of the system, as shown in Table 4.
Case 4. When R_s > C_s and (C_h + C_p − C_l − F)/(R_1 + R_2) < θ < 1, the public participation level has reached a level at which it has a significant impact on the income of the private sector, making the benefit obtained when the private sector provides high-quality public products/services greater than the increased cost, and the supervision benefit of the government supervision department is greater than the supervision cost. Therefore, the system eventually evolves to the state in which the private sector provides high-quality public products/services and the government supervision department chooses the supervision strategy; E_4(1, 1) is the only stable point of the system, as shown in Table 5.

Numerical Analysis
This section further explores the influence of the public participation level, the participation cost, and other factors on the PPP project supervision process through numerical analysis. In line with the above assumptions, the parameters are selected as follows: R_1 = 100 and R_2 = 120, with the remaining cost, fine, and punishment parameters set to 12, 40, 80, 70, and 20.

(1) It can be seen from Figure 1 that when C_s > R_s, with the increase of θ the system evolves gradually from point E_1(0, 0) to point E_3(1, 0). That is, when the supervision cost is greater than the supervision benefit, the government supervision department chooses the nonsupervision strategy. Moreover, because of the initially low participation level of the public (θ < 0.4), the loss suffered by the private sector when providing low-quality public products/services is less than the increased cost of providing high-quality public products/services, so providing low-quality public products/services is its best choice. As the public participation level improves (θ ≥ 0.4), its influence on the benefit of the private sector increases and the loss from providing low-quality public products/services becomes heavy; for example, the public will turn to an alternative if they have a choice. Therefore, rising public participation can, to some extent, make the private sector consciously provide high-quality public products/services.

(2) It can be concluded from Figure 2 that the cost of encouraging the public to participate actively in the operation of PPP projects also affects the choice of the private sector. This is because understanding the real demand of the public and meeting public preferences takes a certain amount of time and money. As this cost is reduced, the private sector increasingly chooses the strategy of providing high-quality public products/services. Therefore, rather than relying purely on heavier punishment of irregularities by the private sector, the government can also reduce the cost of public participation, relieve the information asymmetry between the supplier and the demander of the project, and improve service efficiency by adopting advanced technologies (such as the Internet and big-data-related technologies).

(3) Figure 3 shows that as the supervision cost of the government supervision department is reduced continually, the system eventually tends to the stable state E_4(1, 1). That is, when the supervision cost is greater than the supervision benefit (e.g., a supervision cost above 15), the government supervision department chooses the nonsupervision strategy and the private sector provides low-quality public products/services; when the supervision cost falls sufficiently below the supervision benefit (e.g., below 10), the government supervision department chooses the supervision strategy and the private sector provides high-quality public products/services. Therefore, although public participation can constrain the behavior of the private sector to some extent, this constraint rests on effective supervision by local government; supervision enthusiasm should thus be raised through continuous reduction of the supervision cost.
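To make the dynamics concrete, the replicator system (4)-(5) can be integrated numerically. The sketch below is illustrative only: the parameter values are hypothetical stand-ins chosen to satisfy the Case 4 conditions (they are not the paper's exact calibration), so the trajectory converges to E_4(1, 1).

```python
# Illustrative integration of the replicator dynamics (4)-(5).
# Parameter values are hypothetical, chosen to satisfy the Case 4
# conditions (R_s > C_s and theta above the high-quality threshold).
import numpy as np
from scipy.integrate import odeint

R1, R2, F, P = 100.0, 120.0, 40.0, 20.0   # extra income, extra loss, fine, punishment
Ch, Cl, Cp = 80.0, 70.0, 12.0             # high/low quality costs, participation cost
Rs, Cs = 15.0, 10.0                       # supervision benefit and cost (R_s > C_s)
theta = 0.6                               # public participation level

def replicator(state, t):
    x, y = state
    # dx/dt: share of private sector providing high quality, eq. (4)
    dx = x * (1 - x) * (theta * (R1 + R2 + F) - (Ch + Cp - Cl) + y * (1 - theta) * F)
    # dy/dt: share of government choosing supervision, eq. (5)
    dy = y * (1 - y) * (Rs - Cs + (1 - x) * theta * P)
    return [dx, dy]

t = np.linspace(0.0, 1.0, 200)
traj = odeint(replicator, [0.2, 0.2], t)
print("final state (x, y):", traj[-1])   # approaches (1, 1) under Case 4
```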
Main Conclusions and Management Insights

5.1. Main Conclusions. This paper builds an evolutionary game model of the private sector and the government supervision department with public participation and analyzes the influence of the public participation level on PPP project supervision behavior. The research finds that even when the nonsupervision strategy is chosen because the supervision cost of the government supervision department is greater than the supervision benefit, an improvement in the public participation level can make the private sector consciously provide high-quality public products/services. Moreover, as the cost of public participation falls, the private sector's enthusiasm for providing high-quality public products/services increases. Meanwhile, with a continuous reduction of the supervision cost, the government supervision department tends to choose the supervision strategy.

Implications for Researchers. This study examines the supervision problem of PPP projects with public participation based on evolutionary game theory, establishes the evolutionary relationship between the government supervision department and the private sector, and then explores the effect of public participation on the outcome of their decisions. The implications for researchers are as follows. First, this paper studies the dynamic game relationship between the government supervision department and the private sector using evolutionary game theory, which can display the long-term evolutionary trend of their decisions; this differs from previous studies. Second, the influence of public participation on PPP project supervision behavior is considered, offering a new perspective on the supervision problem of PPP projects.

Implications for the Government.
Actually, the current operation of PPP projects relies on a top-down design method, the public participation level is low, the participation cost is high [16], and the real demand of the public is not fully reflected; this is one of the main reasons some projects fail. Therefore, first, in order to reduce the cost of public participation, improve the level of public participation, and thereby influence the benefit of the private sector and restrain its opportunistic behavior, the government should establish a unified PPP project management platform through the application of the Internet, big-data-related technologies, and other advanced technologies. Second, the government supervision department can gather supervision information at a lower cost through the unified PPP project management platform and improve supervision efficiency. Third, a competition mechanism should be introduced so as to provide more choices for the public and improve the allocation of rights to the public; the private sector will then continuously improve the level of public services to gain more market share.

Figure 3: Evolutionary track with the change of the supervision cost. Table 2: Local stability analysis of Case 1. Table 3: Local stability analysis of Case 2. Table 4: Local stability analysis of Case 3.
2018-12-14T16:18:21.539Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "f6657827c23d760b3b27969d9ddeb77a141e1281", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/mpe/2016/1760837.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f6657827c23d760b3b27969d9ddeb77a141e1281", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
11980216
pes2o/s2orc
v3-fos-license
Predictive Factors for Reintubation following Noninvasive Ventilation in Patients with Respiratory Complications after Living Donor Liver Transplantation

Background: Postoperative respiratory complications are a major cause of mortality following liver transplantation (LT). Noninvasive ventilation (NIV) appears to be effective for respiratory complications in patients undergoing solid organ transplantation; however, mortality has been high in patients who required reintubation in spite of NIV therapy. The predictors of reintubation following NIV therapy after LT are not well established.

Methods: Of 511 adult patients who received living-donor LT, data on the 179 who were treated by NIV were retrospectively examined.

Results: Forty-three (24%) of the 179 patients who received NIV treatment required reintubation. Independent factors associated with reintubation by multivariate logistic regression analysis were controlled preoperative infections (odds ratio [OR] 8.88; 95% confidence interval [CI] 1.64 to 48.11; p = 0.01), ABO incompatibility (OR 4.49; 95% CI 1.50 to 13.38; p = 0.007), and the presence of postoperative pneumonia at the time of starting NIV (OR 3.28; 95% CI 1.02 to 11.01; p = 0.04). The reintubated patients had a significantly higher rate of postoperative infectious complications and a significantly longer intensive care unit stay than those in whom NIV was successful (p<0.0001). Of the 43 reintubated patients, 22 (51.2%) died during hospitalization following LT vs. 8 (5.9%) of the 136 patients in whom NIV was successful (p<0.0001).

Conclusions: Because controlled preoperative infection, ABO incompatibility, and pneumonia prior to the start of NIV were independent risk factors for reintubation following NIV, caution should be used in applying NIV in patients with these conditions, considering the high rate of mortality in patients requiring reintubation following NIV.

Introduction
Liver transplantation (LT) has become the mainstay of treatment for end-stage liver disease, acute liver failure, hepatocellular cancer, and some metabolic liver diseases [1]. Liver transplantation in Japan is highly dependent on living donors because of a severe shortage of liver grafts from deceased donors [2]. Noninvasive ventilation (NIV) is an effective treatment for acute respiratory failure in many conditions [10-14]. Two randomized controlled studies showed its effectiveness in patients with acute respiratory failure under immunosuppressed conditions [11,14]. On the other hand, in immunosuppressed patients who failed NIV, the rate of hospital mortality was reported to be very high, ranging from 62% to 100% [11,15,16]. Recent data on patients with hematologic malignancies showed that the reintubation rate with NIV was almost 50% and that the mortality rate following NIV failure amounted to 69-79% [15,17]. The respiratory rate during NIV and a longer delay between admission and the first use of NIV, as well as other factors, were significantly associated with NIV failure [15]. However, factors related to reintubation following NIV for patients with postoperative respiratory complications (PRCs) after LT have not been well documented. In our hospital, we have successfully begun to apply NIV for respiratory complications in patients undergoing living donor liver transplantation (LDLT) [18-21]. Although we have subsequently treated many more such cases (over 200) in whom NIV was used following LDLT, reintubation has been necessary in some of these patients.
Therefore, to decrease the rate of reintubation following NIV treatment after LDLT and to achieve a better prognosis, we retrospectively examined patient data to elucidate the factors necessitating reintubation following NIV treatment. We also compared clinical outcomes between patients who did and did not require reintubation after NIV treatment following LT.

Patients
From August 1999 to July 2008, 532 liver transplant recipients, aged 13 years or over, underwent LDLT at Kyoto University Hospital. Of the 200 patients who subsequently received NIV, we excluded 21 who discontinued NIV therapy because of reoperation (regardless of their respiratory status) and analyzed data on the remaining 179 patients. Fifteen of the 179 patients had infections that could be expected to be successfully treated before LT but that did result in postponement of the LT. These patients received LT after the infections were controlled, as evidenced by reduced fever, blood cultures negative for bacteria, and resolution of conditions such as pneumonia, peritonitis, cholangitis, phlegmon, or enterocolitis. After LDLT, all patients entered the Intensive Care Unit (ICU) and required invasive mechanical ventilation before weaning. Extubation was considered under the following conditions: 1) clinical stability; 2) improvement in the underlying disease and its complications; and 3) only minimal ventilator support was required.

Introduction of NIV
NIV was considered for all patients who were receiving oxygen therapy, or who had required reintubation and mechanical ventilation, and who met at least one of the following criteria indicating serious PRCs: 1) a ratio of the partial pressure of arterial oxygen (PaO2) to the fraction of inspired oxygen (FIO2) (PaO2/FIO2) ≤250 while the patient was receiving oxygen therapy; 2) a partial pressure of arterial carbon dioxide (PaCO2) ≥45 Torr; 3) presence of pneumonia while on oxygen therapy; 4) a respiratory rate >25 breaths per minute with active contraction of the accessory muscles of respiration and/or paradoxical thoraco-abdominal motion; 5) atelectasis of more than one lobe; 6) massive or uncontrolled pleural effusion after percutaneous thoracic drainage, because some of the effusion might be ascites from the abdomen driven by the pressure gradient [22]; and 7) other reasons. The FIO2 of oxygen therapy via a nasal cannula, face mask, or reservoir face mask was calculated based on a previously published method [23]. Patients who required urgent intubation due to respiratory arrest, respiratory pauses, severe hepatic coma (above Grade 2), copious tracheal secretions, or hemodynamic instability were not started on NIV.

Noninvasive Ventilation
We used a full-face mask or a nasal mask (Resmed, North Ryde, New South Wales, Australia) for NIV. Ventilation in all patients was by bilevel positive airway pressure (bilevel PAP) devices with oxygen and humidification (VPAP series, Resmed) [18-21,24,25]. After the mask had been secured, the level of pressure support and expiratory positive airway pressure (EPAP) and the amount of oxygen were progressively increased until SaO2 was >95%, accompanied by a decreased respiratory rate and/or reduced activity of the accessory muscles of respiration, decreased paradoxical thoraco-abdominal movement, and improvement in respiratory discomfort. When applying NIV, a doctor stayed at the bedside and observed the patient carefully while the SaO2 and electrocardiogram were monitored. Throughout the first hour, the patient's condition was assessed repeatedly.
For minor complications of NIV treatment such as skin rash, eye irritation, discomfort from the mask or air pressure, or gastric insufflation, we decreased the pressure and/or usage time of NIV, used another mask, or inserted a gastric tube. To calculate the FIO2 during NIV, we used the information supplied by the manufacturer and attached to the mask. Using this information, the FIO2 was determined from the following parameters: the leakage flow rate per minute from the mask at each pressure and the oxygen flow rate per minute during NIV. If the leakage flow rate at the setting was X and the oxygen flow rate was Y, the FIO2 at the setting follows from treating the delivered gas as a mixture of Y of pure oxygen and X − Y of air:

FIO2 = [0.21 × (X − Y) + Y] / X

Discontinuation of NIV
Patients for whom NIV could be discontinued because their respiratory status (including chest X-ray abnormalities) had improved were assigned to the success group. Patients for whom NIV was not successful and who underwent reintubation with mechanical ventilation were assigned to the reintubated group. Criteria for reintubation were as follows: failure to maintain an SaO2 >90% with an FIO2 ≥0.6; development of conditions necessitating endotracheal intubation to protect the upper airway (seizure, severe hepatic coma); development of copious tracheal secretions that could not be expectorated; an increase in PaCO2 accompanied by a pH ≤7.30; and severe hemodynamic instability, defined as a systolic blood pressure <70 mmHg.

Data Collection
Pneumonia was defined as new onset of pulmonary infiltrates with clinical symptoms (fever, cough, purulent tracheobronchial secretions, and dyspnea at rest), leukocytosis, and detection of potentially pathogenic bacteria in sputum or bronchoalveolar lavage culture. Other infectious complications were wound infection, liver abscess, subphrenic abscess, cholangitis, peritonitis, and urinary tract infection. These were confirmed by clinical observation (fever, purulent discharge from the wound, abdominal pain), laboratory markers of inflammation with positive cultures (blood, bile, pus, and urine), and findings from chest X-rays and/or chest computed tomography. The Acute Physiology and Chronic Health Evaluation (APACHE) II score was used to assess the severity of illness at ICU admission [26]. The postoperative laboratory data presented in Table 1 represent values obtained on the morning of the introduction of NIV. Arterial blood gases were obtained before the introduction of NIV and also at the initial assessment after applying NIV (mean time ± standard deviation (SD) following NIV introduction: 3.9 ± 4.4 hours). At the initial assessment after NIV, arterial blood gases could not be obtained in 13 of the 179 patients (7.3%).

Statistical Analysis
Data were analyzed using JMP 9.0 (SAS Institute, Inc., Cary, NC, USA), and values are expressed as mean ± SD or absolute numbers and percentages in each group. We compared the associations between perioperative factors and the result of NIV (success group or reintubated group). Continuous variables were tested by the unpaired t test or the Mann-Whitney U test. Categorical variables were compared using the χ² test or Fisher's exact test. A p value <0.05 was considered to indicate statistical significance. Next, we investigated the associations between perioperative factors and reintubation. Possible predictors of reintubation were tested by univariate and multivariate logistic regression analysis.
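As an illustration of this two-step screening procedure (a minimal sketch, not the authors' actual JMP analysis; the data file and variable names are hypothetical), the univariate screening followed by multivariate logistic regression could look like this:

```python
# Minimal sketch of univariate screening followed by multivariate
# logistic regression, mirroring the procedure described above.
# 'reintubation.csv' and its column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("reintubation.csv")
outcome = df["reintubated"]          # 1 = reintubated, 0 = NIV success
candidates = ["preop_infection", "abo_incompatible", "pneumonia_at_niv",
              "meld_score", "apache2_score"]

# Step 1: univariate logistic regression; keep predictors with p < 0.05.
selected = []
for var in candidates:
    model = sm.Logit(outcome, sm.add_constant(df[[var]])).fit(disp=0)
    if model.pvalues[var] < 0.05:
        selected.append(var)

# Step 2: multivariate model on the selected predictors;
# exponentiated coefficients give odds ratios with the reported form.
final = sm.Logit(outcome, sm.add_constant(df[selected])).fit(disp=0)
print(final.summary())
print("Odds ratios:\n", np.exp(final.params.drop("const")).round(2))
```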
In the logistic regression analysis for reintubation, variables entered in the multivariate analysis were those yielding a p value <0.05 by univariate analysis; p values <0.05 in the multivariate analysis were considered statistically significant.

Preoperative and Postoperative Characteristics of the Patients with NIV
The preoperative characteristics and the operative and postoperative status of the 179 recipients of NIV are summarized in Tables 1 and 2, respectively. The mean model for end-stage liver disease (MELD) score was 24.2 ± 11.0 in the 179 patients. Fifteen patients had controlled preoperative infections: 5 with pneumonia, 7 with spontaneous bacterial peritonitis (SBP), and 1 each with cholangitis, phlegmon, and enterocolitis. As mentioned above, these preoperative infections had been controlled before the LT (controlled preoperative infections) (Table 2). Before NIV treatment, 19 (10.6%) of the 179 patients had been reintubated following the LT for the following reasons: copious amounts of sputum that could not be expectorated, septic shock, pneumonia, tracheal hemorrhage, and respiratory muscle fatigue. NIV was introduced following the second extubation in these 19 patients (Table 1).

Outcomes of Patients with NIV Treatment
In both the success and reintubation groups, the baseline PaO2/FIO2 values were similar, and the PaO2/FIO2 at the initial assessment after NIV therapy was higher in the success group than in the reintubation group, but without significance (p = 0.07) (Table 3). However, a sub-analysis showed that in patients with pneumonia prior to application of NIV, the baseline PaO2/FIO2 was similar between groups (success group: n = 12, 284.4 ± 118.2 vs. reintubation group: n = 12, 231.7 ± 163.0, p = 0.37), whereas the PaO2/FIO2 at the initial assessment after NIV therapy was higher in the success group than in the reintubation group (success group: n = 12, 376.8 ± 140.1 vs. reintubation group: n = 12, 263.6 ± 129.4, p = 0.04). There were no significant between-group differences in the mean changes in PaCO2 at the initial assessment after the start of NIV therapy (Table 3). Eight (5.9%) of the 136 patients in whom NIV was successful died during hospitalization, while 22 (51.2%) of the 43 patients who failed NIV treatment died (p<0.0001). NIV treatment could not be continued in 16 patients for various reasons. In 7 of the 16 patients, NIV was suspended due to complications (6 with severe abdominal distension despite a nasal gastric tube; 1 with concomitant ileus). Nine patients could not tolerate NIV, and the prevalence of intolerance was significantly higher in the reintubation group than in the NIV success group (Table 3). Among the 16 patients in whom NIV was discontinued, 7 were eventually reintubated and 5 of those 7 died. The survival curve shows that patients in the reintubation group had a significantly poorer prognosis than those in the NIV success group (p = 0.0009) (Figure 1). Also, patients who failed NIV had significantly longer ICU stays (19.7 ± 15.7 days vs. 5.9 ± 6.4 days, p<0.0001).

Discussion
Although the data were retrospective, this study of a large NIV series in one hospital identified the following significant factors predictive of reintubation following NIV: controlled preoperative infections, ABO blood type incompatibility, and postoperative pneumonia prior to placement on NIV. After excluding from the analysis the 19 patients who were reintubated following LDLT and were provided NIV after that second extubation, those three factors remained significant.
In addition, after the start of NIV treatment in pneumonia patients, there was a significant difference between the reintubation and success groups not in the baseline PaO2/FIO2 but in the PaO2/FIO2 at the initial assessment after NIV. Patients who are waiting for transplantation are usually severely ill and are sometimes immunocompromised. Therefore, infections develop easily, which often postpones the transplantation. In this study, 15 (8.4%) of the 179 patients who had been administered NIV had an infection that had been controlled before LT. Although the LT team considered that the preoperative infection had been well controlled, this factor was revealed to be one of 3 factors predictive of reintubation following NIV. Since the number of patients with a preoperative infection (n = 15) was small, it was difficult to draw firm conclusions as to the role of preoperative infections in the failure of NIV. In addition to LDLT teams controlling preoperative infections as stringently as possible, this issue should be studied in the future in a greater number of patients.

The prognosis of ABO-incompatible LT in adults has been reported to be inferior to that of compatible LT because of rejection, and in particular there has been a high incidence of acute bile duct and vascular complications [27]. Although it is difficult to determine the contribution of ABO-incompatible LT to reintubation following NIV, the results of this study suggest that patients receiving NIV following ABO-incompatible LT should be cautiously observed for PRCs, which might be influenced by the several complications associated with an ABO-incompatible LT.

A report on hematological patients showed that NIV treatment for respiratory failure with acute lung injury (ALI) or ARDS had a high mortality rate [17]. In the present report, the number of patients with ALI/ARDS was small, and pneumonia prior to NIV treatment following LT was a significant risk factor for reintubation. Among patients with pneumonia prior to NIV, the baseline PaO2/FIO2 was similar in the success group and the reintubated patients, whereas the PaO2/FIO2 at the initial assessment after NIV therapy was higher in the success group than in the reintubation group. Pneumonia has already been identified as a risk factor for NIV failure [28]. Therefore, we propose that if the PaO2/FIO2 does not improve in patients with pneumonia after application of NIV, reintubation should be performed early. However, in this study the PaO2/FIO2 values were not measured directly but were calculated from the formula [20]. Therefore, the findings on PaO2/FIO2 in pneumonia patients in this study were not conclusive, and this issue requires further study.

Our inclusion in the analysis of the 19 patients who were reintubated following LDLT and were then provided NIV after their second extubation might be questioned, as those patients could be considered to represent a separate group of NIV posttransplant recipients. However, we included these patients in the overall analysis because we wanted to provide information for clinicians on all of our patients who received NIV treatment following LDLT. To determine whether risk factors for reintubation following LDLT differed between these 19 patients and the overall study group, we performed a separate analysis and found that the same three risk factors existed for these patients as for the overall study group.

In our study, NIV could not be continued in 16 patients. Seven of these patients were eventually reintubated and 5 of the 7 died. Ambrosino et al.
reported that intolerance to NIV treatment was associated with NIV failure [29]. Thus, new equipment such as new types of masks [30,31] or new types of machines that alleviate the discomfort due to ventilator-related pressure or flow might help to decrease the rate of intolerance to NIV.

Recently, the first review of NIV in adult liver transplantation was published [32]. Although this report was comprehensive regarding the usefulness of NIV during the perioperative LDLT stage, risk factors for reintubation following NIV treatment, which are the topic of the current report, were not discussed. Previously, although we addressed the general effectiveness of NIV in both adult and infant patients [18-21,24,25], this report provides more specific information on the topic than those reports or the recent review.

This retrospective study was done in an institution with extensive experience in the use of NIV [2], and the relatively high rate of use of NIV treatment could be explained by the high mean MELD score (24.2) in the 179 patients whose data were analyzed. Although these study patients already had relatively severe morbidity before they underwent LDLT, the rate of NIV success was 76.0%. This rate was higher than or equivalent to that in immunosuppressed patients in previous reports [10,13,14]. In our institution, we set relatively mild inclusion criteria for introducing NIV and started NIV early, partly because noninfectious respiratory complications in patients following liver transplantation were reported as independent risk factors for pneumonia [5] and because respiratory failure might deteriorate rapidly in immunosuppressed patients after LDLT. The early introduction of NIV might have resulted in the higher rate of NIV success and the lower hospital mortality rate.

This study had several limitations. First was the retrospective design. However, the large number of cases included in our analysis was probably sufficient to minimize this limitation. Second, based on several factors present in the perioperative stage, patients in the reintubated group were in a more serious condition than those in the NIV success group. Therefore, success or failure might be dependent on the patients' condition before NIV treatment. It is difficult to know how these conditions before NIV treatment influenced the success or failure of the NIV treatment. These complicated backgrounds might have caused the R-squared in the multivariate forward logistic analysis for reintubation to be comparatively low (Table 5). However, it is important to manage NIV treatment so that success is achieved without its overuse.

In conclusion, we demonstrated that controlled preoperative infections, ABO blood type incompatibility, and postoperative pneumonia prior to the start of NIV were early predictors of NIV failure. We also showed that in patients with postoperative pneumonia being administered NIV, the PaO2/FIO2 at the initial assessment after NIV therapy was higher in the success group than in the reintubation group. We propose that if patients with a preoperative infection, ABO incompatibility, or postoperative pneumonia receiving NIV do not show improvement in the PaO2/FIO2 after NIV, early endotracheal intubation with mechanical ventilation should be considered as an alternative therapy.
More Effective Software Repository Mining

Background: Data mining and analyzing of public Git software repositories is a growing research field. The tools used for studies that investigate a single project or a group of projects have been refined, but it is not clear whether the results obtained on such "convenience samples" generalize. Aims: This paper aims to elucidate the difficulties faced by researchers who would like to ascertain the generalizability of their findings by introducing an interface that addresses the issues with obtaining representative samples. Results: To do that we explore how to exploit the World of Code system to make software repository sampling and analysis much more accessible. Specifically, we present a resource for Mining Software Repository researchers that is intended to simplify the data sampling and retrieval workflow and, through that, increase the validity and completeness of data. Conclusions: This system has the potential to provide researchers a resource that greatly eases the difficulty of data retrieval and addresses many of the currently standing issues with data sampling.

INTRODUCTION

When developing software, it is a common occurrence for developers to rely on the Git version control system (https://git-scm.com/). This is done primarily to make collaboration between developers easier and because it provides a fail-safe against catastrophic errors. This reliance on Git means that the software development process generates a lot of publicly available data. This data could be immensely useful to Mining Software Repository (MSR) researchers in facilitating the creation of tools to support developers. MSR researchers have realized this potential and the field of research has experienced rapid growth in recent years. However, due to the vastness of the field, there are a lot of difficulties faced when first retrieving data.

Due to the immensity of the Git ecosystem, one's ability to do any form of analysis often requires sampling of the data. However, if researchers are not careful when sampling, the data could easily be flawed. Due to the vastness of the dataset, it is easy to retrieve bad data or heterogeneous data [6]. This process is made even more difficult for MSR researchers focused on Git data because the retrieval process requires using difficult APIs or manual crawling of Git repositories. Many MSR papers currently use these methods to perform data retrieval, and the difficulties with the retrieval process are well documented [2], [4]. Alongside this, many MSR papers perform their sampling by manually selecting one or more projects. As seen in [7], [5], [8], the project selection can be based on a list of different criteria. [5] attempts to separate which projects to analyze by randomly selecting a project from a list of 20 randomly retrieved Java repositories hosted on GitHub that contain a Maven project file (pom.xml).
On the other hand, [7] specifically selects five large scale repositories (OpenStack, LibreOffice, AOSP, QT, and Eclipse) that are integrated with Gerrit and Git. Similarly, there are many ways of performing the data extraction, and often they do not follow a set process. As noted in [7], the process can change even within one's own workflow since the process of scraping one repository is not always applicable to another. This lack of conformity in sampling and data extraction practices between researchers provides a potential for bias and errors in data retrieval.

Many of the publicly available repositories are hosted on GitHub (a popular code hosting and version control platform). As noted above, in order to select a sample of projects for analysis, MSR researchers typically filter by the metadata available on GitHub. GitHub categorizes these projects based on metadata such as stars (a measure for users to keep track of repositories they like) or languages used. Currently, GitHub is the only well known system for sampling of projects and it is quite often the first choice of MSR researchers. For example, researchers might sample projects with more than three stars and/or further refine the search by using the language attribute, the project description, the project creation date, or other metadata provided by the GitHub API. Unfortunately, this process is time consuming, error prone (sometimes metadata is absent or incorrectly specified), and it is only possible to sample projects based on the very limited set of attributes provided by the GitHub API. For example, there is no way to retrieve projects that contain a certain number of source code files from a specific language or to determine all projects that a specific developer has worked on.

The World of Code (WoC) [3] aims to provide an interface that simplifies the data retrieval process by allowing mass retrieval of git data that resides across all open source repositories, such as commits, files, authors, projects, and blobs. WoC allows for rapid cross-data retrieval due to a set of key-value mappings between data types (e.g., commit to author of the commit). More information on the initial implementation of this interface can be found in [3]. Such capabilities are particularly suited to support representative sampling and, more generally, research on software ecosystems. In this paper we describe how we used WoC to support sampling of repositories based on specific criteria that are not available via, for example, the GitHub API (e.g., number of commits, number of committers, lines of code, languages used, age of the repository, library dependencies). Specifically, we compiled databases to provide these sampling capabilities based on activities in git repositories or by authors making commits to all public git repositories. The compiled datasets are stored in a MongoDB collection to make extraction and analysis easier. For MSR researchers interested in analyzing a single project, WoC can be used for data retrieval beyond the scope of the project. Due to the key-value mappings between Git data types, it is possible to easily retrieve information about developer activities outside of the project being analyzed.

CREATION OF THE DATABASES FOR SAMPLING

Using the World of Code, a set of publicly available datasets was created to allow for more exact sampling of git project/author data. To make data retrieval less challenging for users not accustomed to the WoC system, the datasets are stored in MongoDB collections labeled as metadata.
While these datasets contain useful information, they are not an exhaustive collection of all the possible data that can be derived from Git. The World of Code is a system that stores Git data in an easily accessed form. Alongside this, the system is not restricted solely to one source code hosting site. Instead, it saves data from the greater git ecosystem. Both of these metadata datasets were produced by iterating over the entirety of the projects/authors contained within the World of Code. Utilizing the base-mappings between data types, it was possible to retrieve and store all the desired data for these collections. However, considering the size of the data contained within WoC, the process of creating these datasets takes about 24 hours for authors and roughly 48 hours for projects.

The types of MSR analyses that could benefit from better sampling

To determine if the World of Code provides a system unique in its ability to be used for certain MSR research, we considered a set of usage examples. Furthermore, we recruited 3 external researchers to perform/provide research tasks. In the development process, a potentially large factor for developers when considering how to market their software is coding language popularity (e.g., which language should the product be implemented in, which language should be provided for others to interface with a component, etc.). One way to determine popularity using the World of Code is by plotting language usage over time. By assessing the file extensions of each file it is possible to determine which files are related to which coding languages. Afterwards, using the mapping from files to commits, the timestamp related to each commit can be used to determine usage per year.

Another avenue of research the World of Code offers is analyzing developer ecosystems. When analyzing the developers in git, it is quite troublesome to retrieve accurate and specific data on developers' names. Names or emails of developers are frequently misspelled, incomplete, or completely missing. This makes performing any research on developer networks quite difficult. However, the World of Code provides some options for enhancing accuracy measures when attempting to disambiguate author identities. To perform any form of disambiguation algorithm it is necessary to determine common patterns of data irregularity. Fortunately, the World of Code contains a near complete collection of author ids (e.g., John Doe JD@domain.com) within Git. This makes the dataset much more viable for studying and removing such irregularities. Furthermore, using WoC it is possible to disambiguate even further based on other factors of similarity. Such similarity measures include time patterns of commits, writing style within commits, and author id comparison within file changes.

Another form of research that WoC can help perform is analysis of the sustainability of open source ecosystems. By analyzing the existing projects within a particular ecosystem, for example Python's PyPI packages, it is possible to determine how often these projects achieve "feature completeness" (achieved intended usage and requires no further maintenance) versus how often projects are abandoned. This can be done once again using the file to commit mapping and by assessing the imports within each file. Other research areas include analysis of developers across project ecosystems, file cloning across ecosystems, repository filtering, the popularity of and relationship between NPM packages, and investigating what influences the adoption of software.
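A minimal sketch of the language-usage-over-time idea described above follows; the extension-to-language map is abbreviated, and the (filename, commit timestamp) pairs are assumed to have been extracted via WoC's file-to-commit mapping:

```python
from collections import Counter, defaultdict
from datetime import datetime, timezone

# Abbreviated map from file extension to language; extend as needed.
EXT_TO_LANG = {".py": "Python", ".c": "C", ".java": "Java", ".js": "JavaScript"}

def language_usage_per_year(file_commit_times):
    """file_commit_times: iterable of (filename, unix_timestamp) pairs,
    one per commit touching the file. Returns {year: Counter(language)}."""
    usage = defaultdict(Counter)
    for filename, ts in file_commit_times:
        for ext, lang in EXT_TO_LANG.items():
            if filename.endswith(ext):
                year = datetime.fromtimestamp(ts, tz=timezone.utc).year
                usage[year][lang] += 1
                break
    return usage

# Toy example: one Python file touched in 2020, one C file touched in 2021
print(language_usage_per_year([("app.py", 1577836800), ("main.c", 1609459200)]))
```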
To further elucidate how Git data retrieval can be tedious, consider a hypothetical researcher Sam who wants to start repository analysis. Sam wants to analyze the frequency of changes to coding language files in projects. Considering that the average number of lines of code in C files is much greater than in Python files, Sam wants to determine whether C files are changed more often than Python files. To proceed with this research Sam needs to have access to all changes made to previous files over a timeframe in order to be able to determine which (if any) coding language experiences more frequent changes. Sam's sampling and data retrieval workflow is illustrated in Figure 2.

A Use Case for project/author sampling

To achieve this goal, Sam wants to sample 20 disparate projects that contain at least 20 Python files each and a minimum of 5 committing authors. Sam also wants 20 projects that contain at least 20 C files and at least 5 committing authors. They also want these files to contain at least 50 lines of code for the Python files and over 100 lines for the C files. Sam has used GitHub in the past and believes that it must host many such projects. Thus, they want to use it when performing the project sampling. They do a quick search and find that they can get a list of all currently trending C/Python projects. However, despite having a list of projects, they have no way to know which of these projects satisfy all of the requirements without directly assessing each project. Sam hopes to avoid such a time consuming process and wants to automate some of the work. Thus, they do another quick search and realize that GitHub has a way to further restrict the resulting repositories based on the project metadata GitHub collects. Unfortunately, they quickly realize that some of the restrictions they need have not been implemented. Sam has no way to specifically restrict the results by the number of coding language files within each project. Furthermore, there does not seem to be a way to restrict the files to a number of lines. Thus, they need to decide how else they would like to restrict the search or choose to manually search through each project. To make their work more expedient, they decide to modify the search field slightly by simply choosing projects that have a large number of stars and fall into either the C or Python category. This will hopefully satisfy the requirements regarding the number of files and committing authors. Unfortunately, the workload of manually looking for 40 projects is still too time consuming. Thus, Sam decides to use the first 10 of each language found.

After determining the projects Sam would like to analyze, they have to determine how to go about retrieving the desired data. They can either decide to manually scrape the data from each project's webpage or look for a system in place that will do it for them. They do a quick search and find that there are publicly available APIs that will allow them to interface with the GitHub project data. However, all of the APIs will take time to learn and it is hard to determine if the system will have all the desired data. Thus, Sam must decide whether to learn how to use one of these systems or implement a temporary system to personally scrape the data. To avoid potentially retrieving bad data by performing manual scraping, they decide to learn one of the available systems and attempt to retrieve the information. Now Sam must determine how to go about assessing the rate of change of each file.
They know that they can look at different commits in Git and determine if there were changes made. Thus, they choose an API that allows them to retrieve that information and, after a little time and effort, they finally retrieve the desired data.

Had Sam known about the World of Code, they could have saved much time and effort. Sam's hypothetical workflow using the World of Code is illustrated in Figure 3. Using the World of Code they would be able to perform their project sampling in the exact way they originally desired and could have performed their analysis/data retrieval while performing the sampling. Since the World of Code has access to the files, committing authors, and lines of code in each project, they would have been able to perform all of their work quickly and within one system. Since the World of Code contains base-mappings between data types, performing the sampling is a simple matter. This hypothetical process could have been solved in its entirety using the World of Code. Had Sam used the World of Code, Sam would have needed to retrieve all the files associated with a project. Then, to perform a count of the specific language files contained within the project, they could have simply assessed the file extensions of each file. Once a project with the desired number of files had been found, they could have retrieved the number of committing authors of that project by utilizing the direct mapping between projects and authors. This base-mapping would return a list of authors, and whether there were 5 or more authors in the project would be easily discerned. Finally, they could determine whether the coding files contain the desired number of lines by using the mapping between projects and blobs. Blobs in WoC contain the actual contents of each file. Thus, counting the lines of the blobs can discern if the project satisfies the set requirements. Furthermore, once a project had been found that satisfies the requirements, it is possible to directly perform the analysis. Since the World of Code contains all the commits associated with each project and every blob is linked to a commit, it is possible to determine how often one file has been changed. New blobs are not created unless there were changes made to the file. Thus, counting the blobs of each of the language files will quickly retrieve the desired information (a minimal sketch of this appears below), and would have solved Sam's problem.

Not only is the workflow simpler to figure out when using the World of Code, but it also only requires the user to understand one system. Further, had Sam desired to research individual committing authors within Git, the workflow without the World of Code would have been quite a bit more difficult. Currently, there is no known system that allows for sampling of authors, and thus the research would have required personal implementation for any automation of the process. To make matters even more difficult, many of the publicly available APIs lack the requisite data regarding authors to perform in depth research. This would have made their workflow quite a bit more difficult and almost certainly would have forced them to manually implement any scraping of data. However, because the World of Code contains base-mappings for authors similar to those for projects, performing a sampling of specific authors based on certain restrictions is still easily performed. Additionally, the analysis can also be done within WoC when researching authors.
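Below is a minimal sketch of the blob-counting analysis referenced above; project_to_files and file_to_blobs are hypothetical callables standing in for WoC's project-to-file and file-to-blob key-value mappings, not the system's actual interface:

```python
from collections import Counter

def change_frequency_by_language(project, project_to_files, file_to_blobs):
    """Count blobs (i.e., distinct file versions) per language for one
    project. Each new blob corresponds to a change made to the file."""
    ext_to_lang = {".c": "C", ".py": "Python"}
    freq = Counter()
    for filename in project_to_files(project):
        for ext, lang in ext_to_lang.items():
            if filename.endswith(ext):
                freq[lang] += len(file_to_blobs(filename))
    return freq
```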
Considering that much of these examples are focused around making researchers' workflow easier, it is worth noting that much of the above stated work is possible using the metadata datasets compiled using the World of Code as well. While it is not possible to determine lines of code using the metadata datasets, performing the sampling based on the number of language files and authors within the projects is easily accomplished using MongoDB's built-in restrictions. This would allow researchers to fully avoid needing a firm grasp on Perl, Python, or the Unix command line, which is necessary to perform the sampling using the World of Code. Then the later data retrieval/analysis process could be performed using their preferred API.

Project Metadata Extracted from Code Commits

The project dataset is stored in a MongoDB collection titled proj_metadata followed by the current version of WoC (e.g., proj_metadata.Q). The collection stores the total number of authors, commits, and files associated with the project. It also stores an activity range for each project based on the Unix timestamps linked to the first and last commit. Alongside this, the data includes coding language usage based on the file extensions of each file in the project. Due to the prevalence of forked projects in Git, if WoC determines a project to be a fork then the original location of the forked project is included. When the project is hosted on GitHub and has a stars rating, the collection includes information on the stars rating of the project.

Author Metadata

Like the project dataset, the author dataset is stored in a MongoDB collection titled auth_metadata followed by the current version of WoC (e.g., auth_metadata.Q). It includes the total number of commits, blobs, files, and projects the author has participated in. It includes a time frame the author was active based on the Unix timestamps of the first and last commit. Lastly, it includes coding language usage based on file extension.
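A minimal sketch of sampling from these collections with pymongo follows; the connection string and the field names (e.g., NumAuthors, LanguageCounts.Python, ProjectID) are hypothetical placeholders for the actual schema:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed host
proj = client["WoC"]["proj_metadata.Q"]

# Randomly sample 20 projects with at least 20 Python files and at least
# 5 authors, mirroring the use case above. Field names are assumptions.
pipeline = [
    {"$match": {"NumAuthors": {"$gte": 5},
                "LanguageCounts.Python": {"$gte": 20}}},
    {"$sample": {"size": 20}},
]
for doc in proj.aggregate(pipeline):
    print(doc["ProjectID"])
```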
CHALLENGES AND LIMITATIONS OF THE WORLD OF CODE

World of Code Versioning: Since the World of Code contains so much information, mass updates of information must be done in increments. Due to this fact, updates of the WoC version often happen months apart. This is necessary because updating the base-maps requires computationally intensive work that takes a non-trivial amount of time. This versioning of WoC means the data is subject to the latest WoC update. Thus, information on currently active repositories/authors is also restricted by this update system. Timeframe analysis based upon the first and last commit time is therefore not suitable when the data must be truly current.

Language Inclusion: There are restrictions to the languages included in the metadata datasets. The languages included are Ada, C/C++, COBOL, CSharp, Erlang, Fml, Fortran, Go, Java, Javascript, JL, Lisp, Lua, Perl, PHP, Python, R, Ruby, Rust, Scala, SQL, and Swift. When counting the files, languages were determined using the file extensions on the filenames. If the filename does not have one of the extensions for these languages it is not counted as a program file. Alongside this restriction, languages that do not require a specific file extension will be ignored by the algorithm and counted as a regular file. To analyze languages not included in this set, researchers may need to generate a personal dataset that includes the language, or request that the language be included in the next iteration.

Forks: There are limitations to the fork information provided in the metadata collections. This information is based on an arbitrary clustering method developed for determining forks. This was done partly using commits that are associated with many projects. Since there is a timestamp for each commit, it is theoretically possible to find the earliest commit in a project, then see what project that timestamp is associated with, and then claim the current project was forked from that project. However, if such a process is followed, the total number of "un-forked" projects becomes much smaller than is reasonable. This is because many projects have dependencies on widely used projects. Thus, the clustering method that was used had to make determinations on whether the project truly is a fork or not.

RELATED WORK

Retrieving Git data can be a convoluted process and, as is discussed in [7], the use of domain specific APIs can be difficult. [7] describes a framework meant to make mining of code review repositories on Gerrit easier. This framework retrieves code review information from publicly available repositories and stores it in a more easily accessed format. Desired information is compiled by scripts that query the Gerrit API and then parse the returned JSON object into a more easily used format. The parsed data is then stored in a relational database so that interested parties can access the data more easily. Similar to the data compiled by the World of Code, this data is intended for researchers interested in repository analysis. However, unlike WoC, their collected data is targeted at code review repositories and thus is not applicable to general repository analysis.

A common practice to expand the research area is to analyze data pulled from general Git projects. Due to its dominant presence as a source code host, GitHub tends to be one of the primary targets for data retrieval and analysis. There is a publicly available API for GitHub that leverages the REST API to return JSON objects with the requested information. However, as was discussed in [1], using the GitHub API can be difficult, is restricted to specific fields of research, and may lead to biased results if used incorrectly. The API's limitations include restrictions on the total number of requests that can be made per hour and the difficulties of parsing the results. On top of the restrictions already in place from the API, GitHub also only provides a subset of the total Git ecosystem. Despite its prevalence, GitHub only makes up a fraction of the total number of Git repositories. Thus, the data collected there fails to include many projects.

Also presented in [1], GHTorrent, a project meant to mirror the GitHub event timeline and store the raw JSON for retrieval, is another framework that is often used for repository analysis. The system has been picked up by many researchers because it removes the restriction on the number of requests per hour and also attempts to provide the results in an already parsed form. This data is stored in a relational database and a set of MongoDB collections meant for querying. However, because it is based strictly off the GitHub ecosystem, it still has the issue of restricting the data to GitHub alone. It also introduces a set of new issues which are discussed in [1].

CONCLUSIONS

In this paper, we presented the World of Code and the metadata datasets that were compiled within it. Further, we outlined the limitations of WoC and the metadata in regards to data integrity.
This system has the potential to become an excellent sampling resource for researchers interested in MSR and for data analysis of Git projects and Git authors. The World of Code has the potential for much easier Git data retrieval and could be used to drastically simplify the MSR research workflow. Furthermore, the system can be used to create similar metadata datasets in the future to make sampling of data for researchers much simpler.
A research on the determination of productivity levels of tomato grown areas

The research was conducted in tomato-growing lands of Lâpseki, Ezine, Bayramiç and Central districts of Çanakkale province, Turkey. The aim of the study is to check the suitability of the fields for tomato farming and to produce a solution if there is a problem. Disturbed soil samples were taken from 114 points with certain coordinates, at a depth of 0 to 30 cm, and analyses were performed. In the soil samples, texture, soil reaction (pH; 1:2.5), calcium carbonate (CaCO3 %), phosphorus (P; kg ha-1), cation exchange capacity (CEC; meq 100 g-1), iron (Fe; ppm), manganese (Mn; ppm), zinc (Zn; ppm), copper (Cu; ppm) and clay (%) analyses were conducted, and characteristic maps of the region were prepared according to the results of the analyses. Based on these results, the present condition and suitability of the soils were evaluated, and simple statistics along with correlations of the analyzed parameters were examined. Regarding the problems of the area, in low pH areas it was deemed necessary to apply calcium carbonate (CaCO3) or calcium hydroxide [Ca(OH)2] together with physiologically alkaline fertilizers. As for the high pH areas, it was necessary to apply elemental sulfur together with physiologically acidic fertilizers. It was also concluded that Zn application was necessary for the 43.85% of the area with Zn deficiency.

INTRODUCTION

There is a big question about whether conventional farming practices can provide food for a world population expected to exceed 7.4 billion by 2020 (Pendey and Chandra, 2013). For this reason, it has become a necessity to increase agricultural production. Agricultural production consists of animal and plant production, while plant production is made up of fruits, vegetables, grains and industrial plants. Vegetables, particularly tomatoes, have a great significance in human nutrition and health. Tomato (Solanum lycopersicum) is an annual plant of the Solanaceae family that grows 1-3 m tall, native to Central, South, and North America, ranging from Mexico to Peru (Guntekin et al., 2009). Considering global plant production, tomato is the third most consumed and popular vegetable following potato and sweet potato (FAOSTAT, 2018).

A total of 12750 tons of tomatoes, about 8750 tons of table tomatoes and 4000 tons of paste tomatoes, were produced in 2017 in Çanakkale, which covered 2.56% of the vegetable fields of Turkey (TSI, 2017). In addition, among the vegetables of Turkey, tomatoes have taken the first place with 12,750,000 tons of production in 2017 (TSI, 2017). Therefore, tomato is the most important vegetable of the research area (TSI, 2017). The popularity of the tomato depends on its chemical content: 93-95% of a tomato is composed of water, and 5-7% of inorganic compounds, organic acids (citric and malic acid), alcohol-insoluble proteins, cellulose, pectin, polysaccharides, carotenoids and lipids (Petro-Turza, 1987). It is an important source for human nutrition since it contains potassium, organic acids, and vitamins A and C at high levels (Moreno et al., 2008).
Efficiency is required for qualified agriculture and quality products. This is only possible with proper fertilization together with other applications. The ideal soils for tomatoes have: pH 6.0-6.5; a texture composed of a combination of sand-loam or sand-loam-clay; lime <5%; CEC 15-20 meq 100 g-1; P > 90 kg ha-1; exchangeable Zn 1-2 ppm; Fe 2.5-4.5 ppm; Mn < 10 ppm; Cu > 0.2 ppm; and clay <35% (Kacar, 2012). On the other hand, soil fertility varies from place to place (Mandal et al., 2015). Therefore, nutrients and microorganisms in the soil play an important role in improving soil quality (Sun et al., 2011). Farmers may excessively use inorganic and organic fertilizers and pesticides in order to harvest a good yield. In particular, the continuous use of chemical fertilizers increases the concentration of heavy metals in the soil (Arya and Roy, 2011).

The aim was to ensure the controlled use of the chemicals needed, to protect the environment, and to grow quality products. Determining the character of the soil is the first step in this process. Therefore, this research study was carried out to determine the soil character of the study area, and to suggest a solution if there was a problem.

METHODOLOGY

The research was conducted in Lâpseki, Ezine, Bayramiç and central districts of Çanakkale province. Çanakkale is a neighbor to Edirne and Tekirdağ provinces on the European side of Turkey, while it only neighbors Balıkesir on the Anatolian side. The city is located between longitudes 25°40'-27°30' East and latitudes 39°27'-40°45' North (Figure 1). The larger part of its territory is on the Anatolian side and its coastal length is 671 km (TSI, 2017). A Mediterranean climate largely prevails in Çanakkale. However, because it is located in the north-west, it is colder in the winter compared to the typical Mediterranean climate. The lowest temperature falls to 6.4°C in February, while the highest temperature is about 41.7°C in August. Çanakkale has an average annual temperature of 15.2°C and an average humidity of 72.6%. There are more winds in Çanakkale than in its neighboring provinces. In the winter season, there is very little snowfall, and even if it snows, it stays on the ground for up to one week. Rainfall mostly occurs during December, November, January and February (TSI, 2017). The climate is also very suitable for vegetable farming. Çanakkale province has a total area of 9933 km2, 55% of which is comprised of forests. The remaining land consists of arable lands, meadows, and pastures. Just like the climate, the vegetation is Mediterranean vegetation (TSI, 2017).

In this research, the coordinates of 114 locations to be studied were initially identified on the maps (1/100000) of the region. The locations of the identified points were found by GPS and marked. Mixed soil samples were taken from the 114 points at a depth of 0 to 30 cm (Kacar, 2012).

After gathering, soil samples were sent to the laboratory for air-drying. Stones, plants and animal remains were picked out. These samples were then milled and sieved with a 10-mesh sieve (Kacar, 2012). Subsequently, they were analyzed. In the soil samples, texture and clay percentage were detected by the hydrometer method (Bouyoucos, 1962); soil pH was determined by using the 1:2.5 soil-water suspension method (Jackson, 1973); % CaCO3 was obtained by the Scheibler calcimeter (Kacar, 2012); available phosphorus by the Olsen et al.
(1954) method; and DTPA-extractable Zn, Fe, Mn and Cu were determined with the standard method given by Lindsay and Norvell (1978). The results of the analyses of the soil samples belonging to the research area are given collectively in Table 1. In addition, the productivity maps and graphs of the research area were drawn separately according to the results (Figures 2 and 3). Correlations between the obtained parameter values (Table 2) and descriptive statistics (Table 3) were investigated with the MSTAT statistics program (Akdemir et al., 1994).

RESULTS AND DISCUSSION

Four different soil textures were identified in the research area (Table 1 and Figure 2). Of the soil, 42.9% was identified as sandy, 29.8% as loamy-sand, 11.6% as sandy-loam and 11.6% as sandy-clay-loam (Bouyoucos, 1962). According to the analyses, 84.3% of the area is composed of sand, loamy-sand and sandy-loam textures (Table 1 and Figure 2). Texture, which does not easily change, is an important physical property that affects the land character the most. This property is directly related to water, air and heat, and it significantly affects the nutrient reserve (Brady and Weil, 2008). A texture consisting of sand-loam or sand-loam-clay combinations is suitable for vegetable agriculture; thus, there is no problem for tomato in this respect (Güneş et al., 2013).

In the research, the pH value was determined to vary between 4.6 and 8.1, and the average was found to be 6.7 (Table 2). The pH value was lower than 5.5 for 12.2% of the samples, higher than 6.5 for 58.7% and between 5.5 and 6.5 for 29.1% (Table 1, Figures 2 and 3). In this range, there would be no problem in the uptake of macro and micro elements. Soil pH is one of the most important factors in the relationship between soil chemistry and nutrients, and in the intake of elements (Güneş et al., 2013).

The ideal soil pH should be between 6.0 and 6.5. If the pH is higher than 6.5, the plant's intake of metallic micronutrients (Fe, Zn, Mn, Cu) and boron (B) becomes more difficult and decreases. However, if the pH is lower than 5.5, phosphorus (P) and molybdenum (Mo) cannot be taken up by the plant (Kacar and Katkat, 2010). When Table 3 was examined, a statistically insignificant negative correlation was observed between pH and Fe, Mn, and P. In the areas with a pH above 6.5, 1000-2000 kg ha-1 of elemental sulfur should be used, and the fertilizers should be chosen to be of physiologically acidic character. On the other hand, in areas with a pH value below 6.5, 1000-2000 kg ha-1 of CaCO3 or Ca(OH)2 and fertilizers of physiologically alkaline character should be used (Kacar and Katkat, 2010).
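Descriptive statistics and correlation analyses of this kind (cf. Tables 2 and 3) are straightforward to reproduce outside MSTAT; the following is a minimal sketch in Python/pandas, in which the input file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical soil-sample table; columns mirror the measured parameters.
soil = pd.read_csv("soil_samples.csv")  # pH, CaCO3, P, CEC, Fe, Mn, Zn, Cu, Clay

params = ["pH", "CaCO3", "P", "CEC", "Fe", "Mn", "Zn", "Cu", "Clay"]
print(soil[params].describe())              # min, max, mean, etc. (cf. Table 3)
print(soil[params].corr(method="pearson"))  # correlation matrix (cf. Table 2)
```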
The lime [calcium carbonate (CaCO3 %)] in the study ranged from 0.1 to 41.8%, while its average was detected as 10.95% (Table 2). In an ideal soil, the lime content should not exceed 5% (Brady and Weil, 2008; Kacar and Katkat, 2010). However, there is no problem with lime levels of up to 15%. In 33.33% of the study area, lime exceeded 15% (Table 1, Figures 2 and 3). Except for this 33.33%, the research area does not have a problem concerning lime, under the condition that correct feeding and pH control are conducted. Phosphorus is bound in the 33.33% of the research area. In addition, Zn, Fe and Mn are taken up at low levels (Güneş et al., 2013). The negative relationship between lime and P, Fe, Mn, and Cu (Table 2) can be explained by the high lime content of the soils in the region (Güneş et al., 2013).

Moreover, a proportional relationship was observed between lime and pH. It was suggested to use sulfur and organic acid for the problematic soils (the 33.33%) in the research area (Güneş et al., 2013).

The phosphorus (P) content of the soil samples ranges from 105.0 to 2147.0 kg ha-1, with an average of 416.8 kg ha-1 (Table 2, Figures 2 and 3). According to Kacar (2012), the phosphorus level determined in the research area was found to be sufficient (P ≥ 90 kg ha-1) (Güneş et al., 2013). This is despite the lime and pH factors, which inhibit phosphorus uptake (Güneş et al., 2013); it can be explained by the accumulation of dicalcium phosphate or tricalcium phosphate with the repeated application of phosphorus in each production year (Kacar and Katkat, 2010; Güneş et al., 2013). In addition, although not statistically significant, a negative correlation was detected between CEC and Fe, Mn, and P, a positive low correlation between CEC and Zn and Cu, and a high correlation between CEC and clay (Table 3). It is necessary to increase the solubility of the phosphorus. For this purpose, sulfur, leonardite, or organic acids should be used (Kacar and Katkat, 2010).

Iron (Fe) varied from 1 to 27 ppm in the research area, with an average of 8.239 ppm (Table 2, Figures 2 and 3). It was determined to be at low levels (Fe ≤ 2.5 ppm) in 28.9% of the samples, at adequate levels (2.5-4.5 ppm) in 28.9%, and at high levels (Fe > 4.5 ppm) in 42.2% (Table 1) (Eyupoglu et al., 1996). The usefulness of iron in calcareous soils is reduced by the concentration of HCO3- (Bloom and Inskeep, 1988). In addition, the effect of high pH is more conspicuous. Due to high pH (pH > 6.5), Fe cannot be taken up in 58% of the soils (Table 1) (Kalbasi et al., 1988; Kacar and Katkat, 2010). In the 33.33% of the soils in which lime is high (CaCO3 > 15%), it will not be possible to take up the iron. In turn, the tomato reacts strongly to iron deficiency. Therefore, iron deficiency should be monitored in those areas, and fertilization should be done through the leaf. On the other hand, although statistically not significant, there exists a negative relationship between Fe and Zn, as well as % clay content (Table 3). The solution is to lower the pH level (Kacar, 2012; Güneş et al., 2013).
According to the results, the Mn level is considered sufficient (Martens and Westermann, 1991). 31.57% of the research soil is calcareous-alkaline, and 30.70% is sandy-acidic. However, in lime-alkaline soil (pH > 7; CaCO3 > 15%), Mn is difficult to absorb, because the formed manganese oxide (MnO) and manganese hydroxides [Mn(OH)2] prevent absorption (McKenzie, 1989). In sandy acidic soil, Mn undergoes a washing process due to the lack of a bonding surface despite its high solubility, and it cannot be taken up at sufficient levels. Therefore, there may be Mn deficiency in plants grown in sandy-acidic soils and in calcareous-alkaline soils (Kacar and Katkat, 2010). Furthermore, high phosphorus has a negative effect on Mn intake and its transport in plants (Taban et al., 1995; Kacar and Katkat, 2010). As a result, in 62.70% of the soils of the research area, the high phosphorus (P), pH and lime conditions should be taken into account and the pH must be adjusted (Karaman et al., 2012).

Zinc (Zn) varied from 0.2 to 2.8 ppm in the research area, with an average value of 0.9 ppm (Table 2, Figures 2 and 3). There was no statistically significant relationship between Zn and the other parameters (Table 3). According to these values, it was determined to be at sufficient and high levels in 23.68% of the research area soils, and at low and very low levels in 76.32% of the soils (Table 1; Figures 2 and 3) (Kaplan et al., 1997; Kacar and Katkat, 2010; Karaman et al., 2012). Marschner (1991) stated that the amount of exchangeable zinc varied between 0.1 and 2.0 ppm depending on soil properties (Hacısalihoğlu et al., 2004).

This information confirms the results of the research. There is a difference between plants in terms of zinc intake. For example, tomatoes take up only 30% of the given zinc. Low soil temperature, high pH and high phosphorus contents also reduce Zn intake (Hacısalihoğlu et al., 2004). As soil pH increases, exchangeable Zn decreases (Kacar and Katkat, 2010). The information provided confirms the research findings. Therefore, while applying Zn in the research area, phosphorus and pH must be taken into consideration, and the pH absolutely must be calibrated (Güneş et al., 2013).

Zinc should be given as needed. In fact, it should be applied through the leaf, especially in areas where the pH is high, because under high pH and highly calcareous conditions its solubility decreases and it cannot be taken up, forming compounds with carbonates such as zinc carbonate (ZnCO3) and zinc hydroxide [Zn(OH)2] (Karaman et al., 2012).

Copper (Cu) varied between 0.1 and 2.6 ppm in the samples, and the mean value was determined to be 1.06 ppm (Table 2, Figures 2 and 3) (Eyupoğlu et al., 1996; Karaman et al., 2012). This depends on the copper-based pesticides used. Kochian (1991) reported that 98% of the Cu in the soil solution forms a complex with organic compounds and therefore is immobilized (Kacar and Katkat, 2010). In addition, Haldar and Mandal (1981) reported that Zn2+ and Cu2+, when present in excessive amounts in the soil, adversely affect their intake by plants (Kacar and Katkat, 2010). No application proposal was needed because Cu was found to be sufficient in almost all soil samples (Hacısalihoğlu et al., 2004).
In the survey, clay (20-35%) was detected in only 20 samples (Figures 2 and 3). These fields are defined as SCL (Güneş et al., 2013). No clay textures (clay >35%) were detected in any of the other units. In this respect, the research area was determined to be suitable for tomato production (Brady and Weil, 2008).

Conclusion

The main problem in the research area is that there may be problems related to the intake of P, Zn, Mn and Fe depending on the levels of pH and lime. According to the research results, textures containing sand, loam and clay combinations, except for 100% clay, are suitable for tomatoes. Where the pH is 6.5 or above and lime exceeds 10%, 1-2 t ha-1 of elemental powdered sulfur or organic acids should be used; where the pH is below 6.0, CaCO3 or Ca(OH)2 should be used depending on the pH level. Thus, the pH will be calibrated and the antagonistic relationships between P, Zn, Mn and Fe will be prevented. Especially where the lime is above 10%, P should be applied locally without mixing it into the soil, while Zn, Mn and Fe should be fed to the plants through the leaves. CEC was under 15 meq 100 g-1 in 63.17% of the research areas. For these areas, 20-30 t ha-1 of leonardite should be used to increase the CEC values.

Cu, Mn and Fe were sufficient in high percentages of the soil samples (96.5, 88.6 and 71.1%, respectively). Zn was found to be low or very low in 76.3% of the samples. Because there is a stronger antagonistic relationship between Zn and pH, % CaCO3 and P, Zn was found to be low; so the application must be made through the leaves.

The higher Mn is related to the manganese-rich soils of Turkey, while the higher Cu is related to the copper element in the compositions of pesticides. The state of Fe also depends on high iron application. Therefore, when Cu and Mn are not given, Fe should be applied to the leaf. If the recommendations are followed, the pH and CEC in the research area will be adjusted, and P and Zn intake will be easier. In addition, the nutrition problem of tomatoes will be eliminated and the yield will be increased.

Figure 1. Geographical location and map of the research area.
Figure 2. Mappings according to levels of research findings.
Figure 3. Analysis results of the soil samples according to coordinates.
Table 1. Analysis results of research area samples according to coordinates.
African Diasporic Choices: Locating the Lived Experiences of Afro-Crucians in the Archival and Archaeological Record

The year 2017 marked the centennial transfer of the Virgin Islands from Denmark to the United States. In light of this commemoration, topics related to representations of the past, and the preservation of heritage in the present, entangled with the residuum of Danish colonialism and the lasting impact of U.S. neo-imperial rule, are at the forefront of public dialogue on both sides of the Atlantic. Archaeological and archival research adds historical depth to these conversations, providing new insights into the lived experiences of Afro-Crucians from enslavement through post-emancipation. However, these two sources of primary historical data (i.e., material culture and documentary evidence) are not without their limitations. This article draws on Black feminist and post-colonial theoretical frameworks to interrogate the historicity of archaeological and archival records. Preliminary archaeological and archival work ongoing at the Estate Little Princess, an 18th-century former Danish sugar plantation on the island of St. Croix, provides the backdrop through which the potentiality of archaeological and documentary data are explored. Research questions centered on exploring sartorial practices of self-making engaged by Afro-Crucians from slavery through freedom are used to illuminate spaces of tension as well as productive encounters between the archaeological and archival records.

Introduction

The year 2017 marked the centennial transfer of the now U.S. Virgin Islands from Denmark to the United States. Though entangled with the residuum of Danish colonialism, and the lasting impact of U.S. neo-imperial rule, topics related to representations of the past, the preservation of heritage in the present, and the contemporary politics of remembrance are at the forefront of public dialogue on both sides of the Atlantic in light of this commemoration. Archaeological and archival research adds historical depth to these conversations, providing new insights into the lived experiences of African descendant people in the Caribbean from enslavement through post-emancipation. Moreover, this archipelago-specific line of research addresses a gap in the literature, as very little archaeological work has explicitly focused on the experiences of enslaved and later free Afro-Caribbean people in the former Danish West Indies (e.g., Blouet, 2013; Lenik, 2004; Odewale, 2016). Historical archaeological work throughout the Circum-Caribbean tends to favor Anglo-American, British, French, and Spanish occupied sites. This favoritism may partly be due to more widely accessible archival collections on these colonial sites. However, the Danish National Archive, the Photo and Map Collection at The Royal Danish Library, as well as other archives and collections in Denmark, undertook multi-year initiatives to digitize archival records regarding Denmark's role in the Transatlantic Slave Trade. The National Archive alone has uploaded more than 5 million digital scans of documents, making possible new avenues of inquiry from scholars across the world. However, open access to newly digitized documentary sources is not without challenge. As will be explored in depth below, open access does not equate to equitable legibility of the documents available, nor does it provide transparency regarding the subjective nature inherent in the production of archival collections and within processes of digitization.
Scholars need to interrogate the genesis of the data sources (i.e., the archaeological record and archival record) used, as these tensions within their creation are liable to be reified in the interpretative frameworks that shape historical narratives. Through an examination of preliminary archaeological work taking place at the Estate Little Princess, an 18th-century Danish sugar plantation located on the island of St. Croix, USVI, this article explores the potentiality of archival and archaeological sources to examine past lifeways of Afro-Crucians from slavery to freedom through the lens of dress practices. This paper directly addresses the power that recovered historical material culture and documentary sources wield in the construction and dissemination of history by focusing on the history of Danish colonial spaces and the people that occupied those spaces. The archival and archaeological records on the Transatlantic Slave Trade are spaces of confinement and liberation within the production and dissemination of historical narratives about the lives of enslaved, free, and later emancipated African Diasporic people in the former Danish West Indies. The work taking place at the Estate Little Princess is a Black Feminist archaeological gesture towards redress, reckoning with permutations of epistemic violence within the archaeological and archival record. Epistemic violence, as Gayatri Chakravorty Spivak (1993) reminds us, is a colonial production that works to enact violence on subjugated people by legitimating certain knowledge forms and disavowing others. The result of epistemic violence is the proliferation of silences that truncate and conceal the experiences of African Diasporic people within the archival and archaeological records of enslavement in the former Danish West Indies. Silences are not innocuous. In this article, I suggest that silences are indicative of operations of power and oppression inherent in the creation and dissemination of a nationalist narrative of "innocence," as it pertains to Denmark's involvement, subsequent divestment, later denial and now neo-liberal engagement with the Transatlantic Slave Trade and its afterlives.

Nordic countries in the last decade have begun a process of reckoning with their involvement in the Transatlantic Slave Trade, most notably in the scholarship on Afro-Swedish experiences (Adeniji, 2014; Cuesta & Mulinari, 2018; McEachrane, 2012, 2014; Miller, 2017; Osei-Kofi et al., 2018; Sawyer & Habel, 2014). In her work Figuring Blackness in Sweden, Monica Miller notes Sweden's investment in a narrative that positions the nation as "morally superior and advanced, having avoided the most direct political, social, and cultural consequences of twentieth-century Europe's most significant upheaval" (Miller, 2017). Miller goes on to state that the result of this narrative is the ideology that "racial problems happen elsewhere" (Miller, 2017). Afro-Nordic Experiences, an anthology edited by Michael McEachrane (2014), is part of the new wave of interdisciplinary scholarship exploring the present-day experiences of people of African descent in Nordic countries. Within the anthology, scholars explicitly tie the experiences of Afro-Nordic people to the Nordic countries' involvement in the enslavement of millions of Africans throughout the 17th, 18th, and 19th centuries.
Though not to the extent of their Swedish counterparts, Denmark has followed this wave, acknowledging that the agricultural prosperity of its colonies during the 18th and 19th centuries, built on enslaved labor and the decimation of native populations, made it one of the wealthiest nations in Europe. Part of Denmark's reckoning with its past has included public programming, state-funded exhibitions, and a public acknowledgment of its role in enslaving Africans in the former Danish West Indies. These actions are not without complications. It is through these very actions that Denmark continues to position itself as "morally superior and advanced," having possessed the ability to move forward as a country not troubled and defined in terms of race, but unified through an ideology of "nation." As Miller stated in regard to Sweden's national narrative, Denmark has produced a narrative invested in an understanding that "racial problems happen elsewhere." This understanding that "racial problems happen elsewhere" upholds a notion of colonial "innocence." Lill-Ann Körber (2018) explores the notion of "innocent colonialism" as it relates to Denmark's recent engagement with its colonial past. "Innocence" for Körber is seen through Denmark's "reluctance…to acknowledge accountability, guilt, or debt" for its involvement in the Transatlantic Slave Trade and the subsequent aftermath of such wrongdoing, experienced by marginalized peoples in Denmark and in its former colonies (Körber, 2018, p. 27). This propagation of innocence undergirds Denmark's hegemonic national narrative and acts as a form of epistemic violence that invalidates and erases the outcries of Afro-Danish people who vocalize the myriad ways in which processes of racialization and institutional forms of racism structure their everyday lives (Danbolt & Wilson, 2018). The rise in academic scholarship (Danbolt, 2017; Jensen, 2018; Körber, 2018; Simonsen, 2007) and visual and performing arts (see works by La Vaughn Belle & Jeannette Ehlers, 2018) concerning the 2017 commemoration, along with the rise of the Movement for Black Lives in Denmark (Danbolt & Wilson, 2018), peels back the veneer of Danish society, exposing linkages between past and present African Diasporic experiences. Recent historical archaeological investigations taking place on the islands of St. Croix (Blouet, 2013; Dunnavant et al., 2018; Lenik, 2004; Odewale, 2016) and St. John also cannot be divorced from the 2017 commemoration of the centennial of the transfer. Both the archaeological and archival records are part of the ever-growing tool kits from which scholars and artists are pulling to explore the era of enslavement and its afterlives on both sides of the Atlantic. This article is part of the afterlife of slavery, and a form of "wake work," pulling from Christina Sharpe (2016), that tends to the dead by grappling with the myriad ways slavery ruptures the present. Sharpe states that "in the wake, the past that is not past reappears, always, to rupture the present" (2016, p. 35). Dilapidated coral-constructed windmills, "great houses," factories, and enslaved village domestic structures are omnipresent and hyper-visible throughout the island of St. Croix, making Sharpe's words even more pertinent.
The 17th-, 18th-, and 19th-century architectural reminders of the plantocracy, which rest as the foundation of the present's derelict post-oil-refinery economy, rupture the façade of "innocent colonialism," rendering the present illegible without daily confrontations with the past. Historical documentation coupled with material culture allows scholars an avenue of redress in the wake, as we attempt to provide more flesh to the historical narratives of African Diasporic and European peoples who lived and labored during the era of Transatlantic Enslavement. The more than 5 million documents disseminated through The National Archive digital repository help make this wake work possible, as scholars explore the historical impacts of racism, classism, and sexism on present-day Afro-Virgin Islanders and Afro-Danish people as symptomatic of the afterlife of slavery. However, digitization efforts should not go uninterrogated, as they too contain within them vestiges of colonial guilt, making it impossible to untangle the processes of their creation from the social context in which they were produced. In the following paragraphs, I offer a critique of the archaeological and archival records while simultaneously illuminating them as spaces of potentiality and possibility for gaining insights into the "interior lives," to pull from Toni Morrison (1990), of the enslaved and later free African diasporic peoples of the former Danish West Indies. This article works to blur disciplinary boundaries intentionally. While I acknowledge that the physicality, processes of creation, and methodology of retrieval for the two data sources are very different, I call into question the seemingly unquestioned historicity of the archaeological and archival records. Within this interrogation, I examine my confrontation with the digital archive, question the seemingly objective nature of digitization processes, and illuminate the messiness of material culture recovered from the Estate Little Princess in an attempt to locate the past lived experiences of African Diasporic women in the former Danish West Indies.

Locating Voices of African Diaspora Matter Within the Archives

Within my research at the Estate Little Princess, I am interested in generating a gendered history of the former Danish West Indies, asking specifically about the past lived experiences of African Diasporic women at the site. As a scholar of the Transatlantic Slave Trade, attempting to locate the historical narratives of African Diasporic women necessitates an engagement with the historiography of the Americas. Work by Black Feminist social historians (Berry, 2007, 2017; Finch, 2010; Fuentes, 2016; Lindsey & Johnson, 2014; Stevenson, 2007) illuminates how the historiography of the Americas is produced through the creation and reification of epistemic violence within the archive. I would add to this significant work that the archaeological record, along with historical narratives derived from its interpretation, also produces a reification of epistemic violence. The ongoing wave of post-processualism within the field of archaeology, primarily feminist archaeological studies (Claassen, 1992; Conkey & Spector, 1984; Gero & Conkey, 1991) and studies on race, racism, and racial politics (Epperson, 1999, 2004; Franklin & Paynter, 2010; Mullins, 1999, 2001, 2012; Orser, 1998, 2004), attempts to address formations of epistemic violence within the archaeological record.
The archival and archaeological records are often spoken of as intrinsically different, produced and studied through different methodologies. However, for a moment, I want to posit that one of the connecting threads between them is the pervasive way in which epistemic violence structures all levels of their production and subsequent study. While there is seminal scholarship that interrogates the production of the archaeological record (Conkey & Gero, 1997; Conkey, 2003, 2007; Engelstad, 2007; Voss, 2006; Wylie, 2007), I have found that the work of social scientists, social historians, and digital humanities scholars who study Transatlantic Enslavement offers archaeologists intersectional tools to aid in locating and positioning past African Diasporic lives in the archaeological record. Saidiya Hartman's (1997, 2019) work provides a methodology for the study of the archives through a discussion of how scholars encounter African Diasporic experiences in the archive. Hartman's (1997) work on archival production illustrates how African Diasporic women occupy spaces of silence and have been subject to erasure within archives on the Transatlantic Slave Trade. Processes of erasure in the past and silences that pervade the archive in the present are symptomatic of ongoing iterations of epistemic violence. For Hartman, the historiography of the Transatlantic Slave Trade is grounded in epistemologies of the "fort or barracoon" that focus on the quantitative. This focus on the "fort or barracoon" renders Black bodies as commodities within the historiography of the Americas and reflects inequalities that thrive within the present. Jessica Marie Johnson expands on Hartman's notion of fort and barracoon epistemologies rooted in the quantifiable by examining studies of enslavement at the "digital crossroads." Johnson warns that the term data gestures "to the rise of the independent and objective statistical fact as an explanatory ideal party to the devastating thingification of black women, children, and men" (2018, p. 58). The notion of "thingification" that Johnson articulates, pulling from Marxism, is the result of fort and barracoon epistemologies. I argue this "thingification" is a point of slippage for archaeologists who come to study people through the materiality of their lived experiences. The space between studying things and the "thingification" of the people we study is a space of moral and intellectual tribulation for archaeologists who study the African Diaspora (Battle-Baptiste, 2012). Data science has long been ingrained within archaeological methodology; however, new waves in the digital humanities, the drive for open-source data, and large data sets for intra- and cross-site comparative analysis bring the warning of "thingification" into full view within the field. Hartman and Johnson act as reminders for archaeologists that we must make sure that data science, as a tool, is not utilized to reify silence and erasure by replacing the flesh, voice, and lived experience of those we examine with statistically significant artifact variations and distributions. Vestiges of epistemic violence also occur in the archival record, especially as archival collections undergo large-scale digitization projects.
Hartman's and Johnson's call to interrogate the seemingly objective nature of archives and the production of quantitative datasets resonates within digitization efforts that result in open-access e-catalogs composed of reference numbers that numerically link physical objects (i.e., a diary, plantation ledger, photograph, painting) to their digital surrogates. As mentioned above, open access and equitable accessibility to knowledge do not always equate. Access to open-access e-catalogs, such as the 5 million documents disseminated through The National Archive digital repository, still requires specific hardware, such as a desktop computer with high-speed internet, that can access and download digital files. This means that those who do not have access to this hardware, for example, due to geographic location or socio-economic status, do not have access to open-access e-catalogs. Additionally, the processes of digitization that allow copies or digital surrogates of a physical object to be made available online are carried out through a process of social mediation. The result of this social mediation is the production of a digital surrogate that is propagated as an objective facsimile of the original physical object. However, the process of social mediation, which is shaped by the social context in which a digital surrogate is produced, calls that objectivity into question. The result of social mediation is the creation of a digital surrogate that is laced with subjectivity, steeped in the decision-making processes of collection managers that are often reflections of institutional values. As I have outlined above, government-funded institutions uphold the values of the nation. In the case of Denmark's National Archive, the result is a production of digital surrogates laced with notions of "colonial guilt" and "innocence" that reproduce silences and erasures in the archival record. In the example that follows, a digital surrogate is examined, and the affordances (searchability, annotations, metadata) of the surrogate are interrogated to illuminate slippages in the historicity of digital collections. Locating erasures within processes of historical production in the archaeological and archival records matters. This article seeks to illuminate fundamental ambiguities within the historiography of the Atlantic World. Michel-Rolph Trouillot (2012, p. 2) stated that the production of history is an ongoing process in which actors and narrators create "both 'what happened' and 'that which is said to have happened.'" Trouillot (2012, p. 26) interrogates the production of history, stating that: "Silences enter the process of historical production at four crucial moments: the moment of fact creation (the making of sources); the moment of fact assembly (the making of archives); the moment of fact retrieval (the making of narratives); and the moment of retrospective significance (the making of history in the final instance)." Examining Denmark's state-funded archival collections, explicitly accounting for national interest, requires scholars to take up Trouillot's call to critically interrogate the production of archival collections. Trouillot positions the creation and dissemination of history as always already subjective. For example, in the winter of 2018, I was sifting through the Danish Royal Library's digital photography collection online. I queried the collection for images of the Danish West Indies and came across postcards of African diasporic people from the 19th and 20th centuries.
One image of particular interest, given my research pursuits, was an early 20th-century postcard that featured a young woman of African descent from the Danish West Indies (Figure 1). The postcard was cataloged under the title "Ung Pige" (Young Girl). The digital object I viewed had limited affordances, not containing any searchable keywords that would denote the race (i.e., Black, Negro, Slave, Enslaved, Creole) of the person in the image. It quickly came to my attention that none of the digitized postcards in the collection contained searchable keywords affiliated with racial designations. As a result, I would have to view each image to subjectively determine who could have been interpolated as being of African descent. Upon viewing the back of the postcard titled "Ung Pige," I could see that before the postcard's digitization, someone had written on the card in pencil "Ung Neger Pige" (Figure 2). While the Danish term "Neger" could now be regarded as a derogatory descriptor for someone of African descent, I found it interesting that rather than catalog the postcard with a 21st-century politically correct racial signifier, all racial signification was erased. This cataloging practice is found throughout the collection. My mind raced with questions. How can an archival collection of 19th- and 20th-century postcards of the Danish West Indies, a society imbued with contentious processes of racialization, come to be cataloged in the 21st century without any mention of racialized identifiers? What mattered in the process of "fact assembly" when it came to how images and documents were cataloged? Historical figures, those of European and African descent alike, had no cataloged racial signifiers. I highlight this example of racial erasure to demonstrate how Denmark's investment in a narrative of post-racial innocence produced an archive on the Transatlantic Slave Trade that stripped historical actors of their racial identifiers. However, race as a social construct and racism as an experiential fact mattered in the past and matter in the present. Not acknowledging racial distinctions does not change that fact; instead, it blurs, conceals, and confines the different lived experiences of peoples of African descent on the island. Expanding on these challenges, specifically as they relate to uncovering the experiences of women in the archive, Hartman (2008, p. 3) states that researchers come to the Black feminine body in the archive through "little more than a register of her encounters with power" and that these encounters provide "a meager sketch of her existence." The digitization of millions of new historical documents from repositories in Denmark provides more avenues for these encounters within historical photography, probate records, runaway slave advertisements, and lists of Afro-Crucian female property owners in "Free Gut," a section of St. Croix located in Frederiksted, to name a few. However, practices of erasure make it much more challenging for scholars querying online databases for documents. These newly accessible digital repositories make available formerly uncharted spaces of inquiry. It is within the uncharted that Black women emerge, albeit obscurely. Through an examination and critique of Denmark's digital archival repositories in the following section, I chart a course through documentary sources, illuminating fleeting encounters with African Diasporic people in order to expose spaces of challenge within spaces of possibility.
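To make the affordance problem concrete, the following is a minimal sketch, in Python, of the situation described above. The record structure, field names, and entries are all invented for illustration and do not reflect the Royal Danish Library's actual catalog schema; the point is simply that a keyword query for a racial identifier returns nothing when that identifier survives only in an un-indexed annotation, forcing image-by-image review.

```python
# Hypothetical catalog records patterned on the "Ung Pige" example.
# All field names and values are invented for demonstration only.
from dataclasses import dataclass, field

@dataclass
class CatalogRecord:
    reference_number: str                      # numeric link: object <-> surrogate
    title: str                                 # cataloged title, e.g., "Ung Pige"
    keywords: list[str] = field(default_factory=list)
    annotations: str = ""                      # e.g., pencil inscriptions on the verso

records = [
    CatalogRecord("DKB-0001", "Ung Pige", ["postkort", "Dansk Vestindien"],
                  annotations="Ung Neger Pige (pencil, verso)"),
    CatalogRecord("DKB-0002", "Gadeparti, Christiansted", ["postkort"]),
]

def search(catalog: list[CatalogRecord], term: str) -> list[CatalogRecord]:
    """Match only the searchable fields (title, keywords), as an e-catalog would."""
    term = term.lower()
    return [r for r in catalog
            if term in r.title.lower()
            or any(term in k.lower() for k in r.keywords)]

# A query on a racial identifier finds nothing: the identifier survives only
# in the un-indexed annotation field, so every image must be reviewed by hand.
print(search(records, "neger"))   # -> []
print(len(records))               # number of records requiring manual review
```

Under these assumptions, the searchable surface of the digital surrogate is narrower than the physical object itself, which is precisely the erasure at issue.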
Unlikely Entryways: Textiles, Ship Logs, and Modes of Sartorial Surveillance

My interest in sartorial practices as a lens through which one can examine the complex interplay between agency and structure led me to archival documents in the hopes of uncovering information regarding the everyday dress practices engaged in by Afro-Crucians during the 18th, 19th, and 20th centuries. Below, I discuss my entryway into Danish archival repositories and historical documents that demonstrate shifts in sartorial practices over time and the unexpected social impact they had in the Transatlantic World. During my archaeological field season at the Estate Little Princess in the summer of 2018, I was fortunate to meet Dr. Katrine Dirckinck-Holmfeld, a scholar and artist whose work blurs the boundaries of archive, memory, imagination, and the digital sphere. Our meeting was facilitated through a program hosted by the Crucian Heritage and Nature Tourism organization (CHANT), which brought several Danish scholars to the island of St. Croix. CHANT hosted several public programming events that highlighted research on Danish colonialism conducted by scholars on the other side of the Atlantic. After an artist lecture, Dirckinck-Holmfeld and I began talking about interesting finds we had come across while sifting through the Danish National Archives digital repository. It was during this conversation that she pulled out her cell phone and showed me an image she had saved of an archival document found among newly digitized ship logs (Figure 3). The document had 17 patterned textile fragments adhered to it. Dirckinck-Holmfeld explained to me how the ship log document was used to record which patterned bolts of cotton cloth would be traded by the Danes in Ghana for enslaved Africans who would then be transported to their colonies in the Caribbean. Initially, I was amazed because a cell phone had delivered a 300-year-old document, demonstrating the ways the digital sphere bridges oceans and traverses time. Secondly, I was enthralled because, as an archaeologist focusing on adornment and working in the Caribbean, I rarely get the chance to come across textiles used in the production of clothing. Instead, we often recover the clothing fasteners that would have held textiles, cut and sewn into garments, together. Overall, I was curious about the ways the document overlapped with my research regarding Danish colonial sumptuary laws that demarcated what enslaved and legally free Africans could and could not wear in the Danish West Indies. I found it interesting that the types of textiles traded in Ghana for enslaved Africans were the same textiles that, through Danish sumptuary laws, were demarcated for people of African descent in the colonies to wear. I believe these laws codified racial distinctions through a technology of seeing, whereby certain patterned textiles signaled racial, class, and status differences on both sides of the Atlantic. The types of textiles the Danes were trading, specifically the variety of patterns, are seen in several examples of colonial sumptuary laws implemented throughout the Americas. These laws included restrictions on three-dimensional supplements added to the body (i.e., clothing, jewelry, hair adornments) along with restrictions on bodily modifications, specifically how one could style one's hair. Laws like these were used to demarcate social differences by regulating appropriate types of dress based on the race, gender, class, and status of the colonial subject (Stoler, 2001, p. 836; Wiecek, 1977, p. 268). These laws worked to produce a technology of seeing and interpolating others through dress. Specific to the Danish West Indies is Governor-General Schimmelmann's 1786 sumptuary ordinance. Historian Neville T. Hall has translated Schimmelmann's ordinance, noting that the law dictated that "Plantation field slaves were allowed coarse cotton or linen for daily use and, as a concession for Sundays and public holidays, cast-offs of little value" (Hall, 1992, p. 94). Hall goes on to translate the ordinance, stating that the law forbade enslaved Africans from wearing "jewelry of precious stones, gold or silver, material of silk, brocade, chintz, lawn, linen, lace, or velvet; gold or silver braid; silk stockings; elaborate up-raised hairstyles, with or without decoration; or any form of expensive clothing whatsoever" (Hall, 1992, p. 94). Enslaved and free Africans were permitted to wear wool, cotton, coarser varieties of lace, and silk ribbon of Danish manufacture. The ordinance also outlined how violators of the sartorial pronouncements would receive 50 lashes. The practice of legally attempting to demarcate difference through appearance has a long history, with sumptuary laws implemented in colonial-era New York (Bianco et al., 2006), South Carolina, and Spanish Florida (Stoler, 2001, p. 836). These sumptuary laws were a means through which the Danish government attempted to regulate dress, and they demonstrate how sartorial practices acted as mechanisms through which methods of racialized surveillance operated in the past. Pulling from the work of Simone Browne (2015) in Dark Matters, racialized surveillance "is a technology of social control where surveillance practices, policies, and performances concern the production of norms pertaining to race and exercise a power to define what is in or out of place" (Browne, 2015, p. 16). By legislating sartorial practices, sumptuary laws codified who was "in or out of place." While there is no strong evidence that these laws were heavily enforced (Hunt, 1996, p. x), their existence stands as evidence of colonial attempts to maintain social control in the Danish West Indies. Browne also outlines how tactics of racialized surveillance necessitate avenues for the production of "dark sousveillance." Dark sousveillance is theorized as the practices engaged in by free, enslaved, and later emancipated Africans that push against tactics of racialized surveillance that are inherently anti-Black, practices that can "appropriate, co-opt, repurpose, and challenge in order to facilitate survival and escape" (Browne, 2015, p. 16). I posit that quotidian sartorial practices offer an avenue through which scholars can examine everyday engagements in dark sousveillance. The archive offers several entryways to explore the implementation of sumptuary laws as a method of racialized surveillance, as well as possible tactics of dark sousveillance engaged in by African Diasporic people from slavery through freedom in the former Danish West Indies. One space of possible inquiry in the archive, for assessing the extent to which sumptuary laws were enforced, would be the thousands of digitized pages of police records from the Danish West Indies available through the Danish National Archive online repository. These records may illustrate to what extent colonial-era sumptuary laws were enforced over time and, if enforced, who were most often accused of such "crimes."
The emphasis on exploring the life cycles of such legislation derives from examples of colonial-era sumptuary laws experiencing a resurgence in the southern United States at the turn of the 20th century (Flewellen, 2018; Sitton & Conrad, 2005). Police records may act as an entryway for scholars to explore sumptuary laws over time and to test whether the implementation and enforcement of such laws followed social trends or movements.

The Materiality of African Diasporic Sartorial Practices: A Case Study at the Estate Little Princess

My work at the Estate Little Princess (ELP) adds to the growing historiography on the former Danish West Indies, contributing material culture data to the history of Afro-Caribbean past lifeways. St. Croix, the largest of the three islands that once comprised the former Danish West Indies, has a long colonial history beginning with Spanish colonists in 1493. Since then, it has been ruled under seven different colonial flags, with its most prolonged occupation under Danish rule from 1733 to 1801 and from 1815 to 1917 (Hall, 1992; Odewale, 2016; Tyson, 1992). ELP is located approximately 1.75 miles west of Christiansted Harbor on the north coast of St. Croix (Figure 4). A former Danish sugar plantation, the estate was established in 1749 by Frederik Moth, the first Danish governor of St. Croix (Tyson, 1992, 2010; Tyson & Highfield, 1994). The plantation was purchased and sold by several European descendant planters throughout the 18th, 19th, and 20th centuries. After the estate was no longer actively producing sugar, it was acquired by Clayton and Opal Shoemaker as a summer home in 1949 (Wright et al., 1980). After the deaths of the Shoemakers, the plantation was willed to The Nature Conservancy (TNC) in 1991, and the site is now the headquarters of TNC's Virgin Islands and Eastern Caribbean programs (The Nature Conservancy, 2017). The purpose of this article is not to provide a detailed history of the European descendant estate owners but to discuss the lived experiences of African descendant people at the site. As a result, only a brief history is provided here; in any case, limited archival research (Tyson, 1992; Wright et al., 1980) has been done on the former owners of the estate. The buildings constructed at the Estate Little Princess cover a history that spans over 200 years (Wright et al., 1980). The architectural remains at the estate include three houses, a sugar factory/distillery building, a sugar mill, a well tower, and several outbuildings, including the remains of an enslaved village, later known as a free laborer village (Dunnavant et al., 2018). Archival research indicates that by 1786, 127 enslaved Africans labored at the site and lived in 53 houses that comprised the enslaved village area. Of the 53 domestic structures, 25 were masonry built, while the remaining 28 were "wattle village houses," making the enslaved village area architecturally diverse. A watchhouse, noted as 'slavevagterhusene' in estate inventories, was also originally constructed at the estate but was removed and reconstructed at the Whim Museum, a public heritage site owned and operated by the St. Croix Landmarks Society. The architectural remains of the ELP were added to the National Register of Historic Places in 1980 (Dunnavant et al., 2018). The estate is a rural coastal plantation that, at its height in 1772, harvested 130 acres of sugarcane with the 141 enslaved Africans who labored in the agricultural fields and the rum distillery.
Sugar production was arduous and dangerous work for the coerced enslaved labor force in the Caribbean. Enslaved and later free Afro-Caribbean people at the Estate likely worked from sunup to sundown, only to then return to their own homes to complete the labors of housekeeping as well as the maintenance of their own subsistence farms. The plantation was dedicated predominantly to the production of cane sugar, which was continuously cultivated and processed on-site until the early 1920s, making the estate one of the last operating sugar plantations on the island of St. Croix (Wright et al., 1980). The site remained continuously occupied from 1749 through the 1960s (Tyson, 1985; Wright et al., 1980). Decades of soil degradation, along with devastating hurricanes in the late 19th and early 20th centuries, resulted in the decline of sugar production at the ELP in the early 1900s. The last recorded sugarcane harvest was a mere 50 acres in 1942, versus 155 acres of harvested sugarcane 128 years prior. In addition to nutrient-deficient soils, increased production costs after the abolition of slavery in 1848, and the ever-pervasive threat of natural disaster, the demand for cane sugar began to wane in favor of beet-based sugar. By the 1950s, the buildings, which had long been in disrepair, started to collapse and succumb to storm damage, including damage from a hurricane that struck the island in 1928. Ongoing archaeological work at the estate included a reassessment of the site after the landfall of Hurricanes Maria and Irma in 2017 (Dunnavant et al., 2018). There are ongoing efforts by TNC to rehabilitate the historic structures identified at the Estate Little Princess. The ongoing archaeological work at the Estate Little Princess is part of an award-winning project built in collaboration with the Society of Black Archaeologists (SBA) and the Slave Wrecks Project (SWP), an international collaboration between the Smithsonian's National Museum of African American History and Culture, the National Park Service (NPS), and the George Washington University Capitol Archaeological Institute. Founded in 2011, the SBA is a nonprofit organization dedicated to advocating for the proper treatment of African and African diaspora material culture through the promotion of academic excellence and social responsibility (Odewale et al., 2018).

Examining Sartorial Practices

During the enslavement and post-emancipation eras in the Danish West Indies (1733-1917), periods marked by racialized servitude, sexual exploitation, and economic disenfranchisement, Afro-Crucians were styling their hair with combs, lacing glass beads around their necks, dyeing coarse-cotton fabric with indigo-berry and vine sorrel, and fastening buttons to adorn their bodies and dress their social lives. Through my work at the ELP, I posit that quotidian sartorial practices, how people dressed their bodies for their everyday lives, are practices of self-making that, through their repetitive daily engagement, constitute the body and form identities. Building off the work of Mary Ellen Roach-Higgins and Joanne B. Eicher (1992), I define sartorial practices as social-cultural practices, shaped by many intersecting operations of power and oppression, including racism, sexism, and classism, that involve modifications of the corporal form (e.g., scarification, body piercings, and hair alteration) and all three-dimensional supplements added to the body (e.g., clothing, hair combs, jewelry).
While much of the archaeological work on adornment practices among Afro-Americans (African and African-descendant people in the Americas) focuses on the era of enslavement, little archaeological work examines sartorial practices as an avenue for identity formation from slavery through freedom. The ELP, with an occupation that spans over 200 years, makes an excellent case site through which to explore this change over time. This research builds on three bodies of interdisciplinary scholarship: archaeological analyses of adornment, Black feminist theory, and the historical archaeology of enslavement and post-emancipation. Within historical archaeological scholarship on adornment, the multivalent meanings carried by dress-related artifacts recovered from the archaeological record make them tools for the formation of identity (Beaudry, 2006; Fisher & Loren, 2003; Galle, 2004; Heath, 1999, 2004; Loren, 2001, 2010; Thomas & Thomas, 2004; White & Beaudry, 2009). I argue that beads, buttons, rivets, suspenders, bodices, hairpins, and hook-and-eye closures are some of the material culture data that, alongside documentary data, serve as evidence of sartorial practices of self-making that form identity and constitute the body through daily iterative practice. Within this project, my conceptualization of processes of identity formation draws from Meskell's (2002) theorization of "iterative practices," where she states that "identities are multiply constructed and revolve around a set of iterative practices that are always in process, despite their material and symbolic substrata" (2002, p. 281). Pulling from Meskell, I argue that beads, buttons, rivets, suspenders, bodices, hairpins, and hook-and-eye fasteners are some of the "small things" that, along with documentary data, serve as evidence of the "iterative practices" that comprise sartorial "practices of self-making" engaged in by individuals. The emphasis on intersecting operations of power and oppression, including racism, sexism, and classism, within my definition of dress draws from Black feminist theory. Black feminist theory, specifically the usefulness of intersectionality as an analytical tool, comes from Kimberlé Crenshaw's (1991) theorization of intersectionality, which locates the positionality of Black women in particular at the intersections of gender, race, and class operations of power and oppression. Black feminist archaeology aids in the interpretation of African Americans' past lived experiences as wholly complex, rather than compartmentalizing the multiple facets of Black experiences (Agbe-Davies, 2001; Battle-Baptiste, 2012; Franklin, 2001; see also Wilkie, 2003, 2004). From its inception, the historical archaeology of African diasporic past lifeways has concerned itself with identity politics (Davidson, 2004; Fairbanks, 1974; Ferguson, 1980; Franklin, 2001; Otto, 1980; Singleton, 1998). Archaeological investigations of the African diaspora have expanded beyond the U.S. to include the West African coast (DeCorse, 2001; Kelly, 1997), South Africa (Hall, 1987, 1993), Brazil (Funari, 1999; Orser & Funari, 2001), and the Caribbean (Armstrong & Kelly, 2000; Armstrong & Mark, 2003; Bates et al., 2016; Singleton, 2015). However, while the Caribbean has become the site of more archaeological excavation, work tends to center on Anglo-American, British, French, and Spanish sites of enslavement and post-emancipation.
My research at the Estate Little Princess brings a site of enslavement and post-emancipation, viewed through a Danish West Indian lens, into conversation with scholarship on the Circum-Caribbean, providing data sets that can be placed in comparison with other sites for further analysis of African diasporic experiences. By integrating documentary and archaeological data, my current research at the Estate provides a framework for testing inferences about the relationship the matrix of domination has to the formation of identity through the lens of dress, across space and time in the former Danish West Indies and the broader Atlantic world. Archaeological excavations at the site during the summers of 2017 and 2018 resulted in the recovery of over 16,000 artifacts, with shovel probes unearthing material culture (e.g., ceramics, glass, metal) pertaining specifically to the eras of enslavement and post-emancipation. This work is preliminary, with plans to continue excavating in the enslaved and later free laborer area for another 3-4 years.

Conclusions: The Potentiality of the Archaeological and Archival Record: The Synthesis of Material Culture and Documentary Data

I open this conclusion with a circa 1890 image of two house servants who labored at the Estate Little Princess (Figure 5). The man wears a top hat, trousers, and a buttoned frock. The woman has her hair pulled back and covered with a scarf. She wears a short gown and a long petticoat that falls to her ankles, with an apron. Their hands are intertwined as they stare back at the camera. The back of the photograph reads, "Nanna Hetta about 1890." Less than 50 years after the abolition of slavery in the Danish West Indies, I wonder how Nanna Hetta, her ancestors, and her descendants, who labored and lived at the Estate Little Princess, constituted their existence through everyday, quotidian practices of self-making. My interest in examining sartorial practices of self-making among Afro-Caribbean people on the island of St. Croix led me to explore avenues of inquiry in the archaeological and archival records. With the Estate Little Princess as a case study, through a synthesis of material culture and documentary data, I will explore sartorial practices over time as an avenue through which to examine the complex entanglements of structure, agency, and the enduring legacy of enslavement. Analysis of material culture data for my research will consist of cataloging and analyzing material culture recovered from the ELP to the standards of the Digital Archaeological Archive of Comparative Slavery (DAACS). DAACS hosts a digital, relational (SQL) database of excavation and artifact information that can be queried online, making archaeological data on the African diaspora widely accessible. Using the DAACS database ensures that material culture recovered from the ELP is cataloged and analyzed systematically to allow intra-site and cross-site comparison. Through the use of SQL, I will analyze clothing and adornment artifacts at the ELP by querying the assemblage for patterns of archaeological variation, assessing frequencies in the distribution of artifacts as a means of inferring the acquisition and disposal of clothing and adornment goods and potential shifts in dress practices (a schematic example of such a query is sketched below). Material culture data recovered will be analyzed to determine what effect, if any, operations of power and oppression had on patterns of discard, the cost of goods, market accessibility, and aesthetic valuation.
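As an illustration of the querying strategy just described, the sketch below runs a frequency query in Python against an in-memory SQLite database. The table and column names are placeholders of my own devising and do not reproduce DAACS's actual relational schema; the 1848 cutoff simply reuses the abolition date mentioned earlier to phase the assemblage.

```python
# Minimal sketch: frequency distribution of adornment artifact classes by
# period, using an invented schema loosely modeled on a DAACS-style catalog.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE artifacts (
    id INTEGER PRIMARY KEY,
    context TEXT,            -- excavation context (e.g., 'ELP village, unit 2')
    artifact_class TEXT,     -- e.g., 'bead', 'button', 'hook-and-eye'
    begin_date INTEGER       -- earliest manufacture date, used for phasing
);
INSERT INTO artifacts (context, artifact_class, begin_date) VALUES
    ('village unit 1', 'bead', 1760),
    ('village unit 1', 'button', 1760),
    ('village unit 2', 'bead', 1820),
    ('village unit 2', 'bead', 1820),
    ('village unit 2', 'hook-and-eye', 1850);
""")

# Count adornment-related classes per rough period; differences in these
# distributions are the raw material for inferences about shifting dress
# practices across slavery and freedom.
query = """
SELECT CASE WHEN begin_date < 1848 THEN 'enslavement era'
            ELSE 'post-emancipation' END AS period,
       artifact_class,
       COUNT(*) AS n
FROM artifacts
WHERE artifact_class IN ('bead', 'button', 'hook-and-eye')
GROUP BY period, artifact_class
ORDER BY period, n DESC;
"""
for row in conn.execute(query):
    print(row)
```

The same query pattern, pointed at a shared catalog rather than a toy table, is what makes intra-site and cross-site comparison tractable.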
Discard patterns within the archaeological record may provide inferences regarding the aesthetic choices people at the ELP were making with regard to dress over time. In addition to material culture recovered, I will create a database that includes all references to adornment and clothing, including clothing type, decoration, and the rate of appearance over time in documentary sources. I will collect and analyze historical photographs and postcards held in the Danish Royal Library as well as the Danish National Archives that provide documentary data regarding late 19th- and early 20th-century dress practices. Additionally, I will utilize the Royal Danish Library digital newspaper collection to analyze runaway slave advertisements in six volumes of The Royal Danish American Gazette (1770-1801), five volumes of The Royal Saint Croix Gazette (1813-1815), 5,824 volumes of the St. Croix Avis, and 296 volumes of the Dansk Vestindisk Regierings Avis (1815-1843). These advertisements will provide data regarding Afro-Crucians' appearance when they absconded during enslavement (Figure 6). I am currently in the process of creating a database that records attributes based on the advertisements, including whether clothing is mentioned and to what extent; an illustrative sketch of one possible record structure follows at the end of this section. This database will be used to assess the clothing practices engaged in by enslaved Africans when they absconded, while allowing what enslaved Africans were actually wearing to be compared with the criminalization of sartorial practices outlined in sumptuary laws. This database will create a baseline from which to draw comparisons to material culture recovered from the ELP and provide contextual data regarding quotidian dress practices among Afro-Crucians across time; ideologies of race, gender, and class; and legal practices of control and surveillance through dress. Through the use of material culture and documentary evidence, my research will shed light on hegemonic ideologies of gender, race, and class, as well as the pragmatic realities of the social and economic conditions of slavery and freedom, through the lens of sartorial choice. Archaeological work, coupled with archival research, provides additional strands of data from which to test hypotheses about methods of surveillance and practices of sousveillance. Together, documentary and material culture allow for a rich exploration into the materiality of sartorial practices at the interstices of structure and agency.
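As promised above, here is an illustrative sketch of one possible record structure for the advertisement database. The attribute set and the example entries are assumptions for demonstration only, not the author's actual coding scheme.

```python
# Hypothetical record structure for coding runaway advertisements; fields and
# sample entries are invented to show the shape of the database, nothing more.
from dataclasses import dataclass

@dataclass
class RunawayAdRecord:
    newspaper: str            # e.g., 'The Royal Danish American Gazette'
    year: int
    clothing_mentioned: bool  # is dress described at all?
    clothing_detail: str      # free-text transcription of the dress description
    textile_terms: tuple[str, ...] = ()  # recurring textile vocabulary

ads = [
    RunawayAdRecord('The Royal Danish American Gazette', 1772, True,
                    'wore a coarse cotton frock and a chintz head-tie',
                    ('cotton', 'chintz')),
    RunawayAdRecord('St. Croix Avis', 1840, False, '', ()),
]

# A simple baseline measure: the proportion of advertisements describing
# clothing, which can then be set against sumptuary-law restrictions.
mention_rate = sum(ad.clothing_mentioned for ad in ads) / len(ads)
print(f'clothing mentioned in {mention_rate:.0%} of sampled ads')
```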
Study on the Effect of Early Comprehensive Intervention of Skin Contact Combined with Breastfeeding on Improving Early Blood Glucose in Newborns of Mothers with Gestational Diabetes Mellitus

Objective. To explore the value of early comprehensive intervention of skin contact combined with breastfeeding in improving early blood glucose in newborns with gestational diabetes mellitus (GDM). Methods. A total of 300 newborns of pregnant women with gestational diabetes who were hospitalized in Wuxi People's Hospital from January 2021 to December 2021 were randomly assigned to the observation group (n = 150) and the control group (n = 150). The former group received early comprehensive intervention of skin contact combined with breastfeeding, and the latter group received postnatal naked contact, physical examination after late umbilical cord cutting, and routine nursing interventions such as early contact and early sucking within 30 min. The peripheral trace blood glucose values at 1 and 2 hours after birth, neonatal hospitalization rate, ear temperature at 30 min, 60 min, 90 min, and 120 min after birth, neonatal crying, incidence of postpartum hemorrhage, uterine contraction/wound pain index, and lactation before delivery, immediately after delivery, at 15 min of early sucking, and at 2 hours postpartum were observed. Results. Compared to the control group, the values of trace blood glucose at 1 hour and 2 hours after birth in the observation group were higher, and the difference between groups was statistically significant (P < 0.05); the neonatal hospitalization rate in the observation group was lower, and the difference was statistically significant (P < 0.05).

Introduction

Gestational diabetes mellitus (GDM) not only increases the risk of adverse pregnancy outcomes but also has many adverse effects on fetal growth and development and neonatal health. The incidence of postnatal hypoglycemia is significantly higher than in normal newborns [1]. According to the 9th edition of the Global Diabetes Map released by the International Diabetes Federation in 2019, about 16.2% of live births worldwide are affected by some degree of maternal glucose metabolism disorder during pregnancy, of which GDM accounts for about 84%. Meanwhile, statistics on the prevalence of GDM in different countries report that the overall prevalence rate of GDM in China is about 8.3% [2]. The incidence of GDM differs across regions of China: the prevalence rate in the southwest and northwest regions is 4%-5%, while that in the densely populated North China, Central China, East China, and South China regions can be as high as 10% [3]. Since the introduction of the two-child policy in China, the rising trend in the national prevalence of GDM has become more pronounced [4]. GDM has adverse effects on the short-term and long-term health of mothers and infants; it not only increases the risk of pregnancy infection, polyhydramnios, premature delivery, birth injury, and postpartum infection but also easily leads to neonatal hypoglycemia and other adverse conditions [5]. Reports indicate a high incidence of neonatal hypoglycemia in the early postnatal period among newborns of pregnant women with GDM, and hypoglycemia increases the risk of neonatal nervous system damage [6-8].
However, clinical hypoglycemia management programs for newborns delivered by pregnant women with GDM (infants of diabetic mothers, hereinafter referred to as IDMS) have several problems, such as overfeeding, unreasonable blood glucose monitoring, and quality standards and monitoring processes that are not standardized. To rationalize IDMS feeding methods and to achieve effective, convenient prevention and timely treatment of critical hypoglycemia, it is urgent to explore convenient and feasible intervention measures to prevent early hypoglycemia in newborns of mothers with gestational glucose disorders. Some studies have suggested that skin contact (skin-to-skin contact, referred to as SSC) is beneficial in reducing the incidence of neonatal hypoglycemia but is not sufficient on its own to improve critical hypoglycemia or hypoglycemia [9]. The strategy guide for promoting breastfeeding points out that healthy newborns who have early sucking and early contact with their mothers have higher blood glucose levels at 75-180 min after birth; at present, however, early sucking is typically completed only at 30 min to 1 h after birth, so the optimal window for early intervention to prevent hypoglycemia in IDMS is easily missed, and complicated nursing procedures also increase adverse neonatal stress and energy consumption. Some studies have indicated that mother-infant skin contact for 90 min or longer immediately after birth can promptly simulate the maternal environment for newborns, and that maternal skin temperature can reduce the blood glucose consumed by the newborn's own heat production more effectively than a radiant warming platform [10]. Therefore, this study enrolled 300 newborns of pregnant women with gestational diabetes who were hospitalized in Wuxi People's Hospital from January 2021 to December 2021 and who voluntarily participated in this study. Skin contact combined with immediate breastfeeding intervention is expected to improve early blood glucose stability in newborns and avoid the series of hazards caused by blood glucose fluctuations and hypoglycemia, keeping blood glucose stable in an ideal state and allowing a smooth transition to a relatively safe period, which carries broader social value.

Patients and Methods

2.1. General Information. A total of 300 newborns of pregnant women with gestational diabetes who were hospitalized in Wuxi People's Hospital from January 2021 to December 2021 were randomly assigned to the observation group (n = 150) and the control group (n = 150). Inclusion criteria are as follows: (1) singleton, full-term GDM newborns; (2) pregnant women aged 18-44 years; (3) normal uterine development, good health, and singleton pregnancy; (4) no serious complications during pregnancy, such as severe hypertension or thyroid, liver, or kidney diseases requiring drug treatment; (5) postnatal blood glucose level of 2.2-7.0 mmol/L; (6) effective breastfeeding possible, with breast milk volume ≥ (+) within 2 hours after delivery; (7) complete information for the pregnant woman; (8) regular antenatal examination; and (9) informed consent to this study and willingness to accept regular follow-up.
Exclusion criteria are as follows: (1) pregnant women with cognitive impairment or abnormal behavior; (2) no effective means of contact; (3) prolongation of the first or second stage of labor; (4) cesarean section; (5) excessive fatigue of the parturient; (6) no breast milk secretion within 2 hours after delivery; (7) poor nipple condition such that the newborn cannot suck effectively; (8) moderate or severe neonatal asphyxia; and (9) severe birth injury. Shedding criteria are as follows: (1) those who asked to drop out of the research, refused to provide relevant information for various reasons, or did not give birth in our hospital; (2) those who participated in other clinical studies during the survey; and (3) cases in which, during the intervention, the parturient refused to continue to complete the observation indexes. The participant flow is shown in Figure 1.

2.2. Intervention Scheme. The control group was given naked contact after birth, physical examination after late umbilical cord cutting, and routine nursing interventions such as early contact and early sucking within 30 minutes. (1) After the GDM pregnant woman gives birth, medical staff closely observe the delivery process and blood glucose changes in the delivery room, stabilizing the woman's emotions and maintaining her blood glucose at normal levels. (2) The ambient temperature before delivery is kept at 26-28°C, and the preheating temperature of the radiant table at 32-34°C. After birth, the newborn's whole body and head are dried of amniotic fluid, and the newborn is laid naked on the mother's chest, with attention to keeping the whole body warm: a hat is worn, the newborn is embraced with both hands, and a preheated towel quilt is used as cover. Meanwhile, umbilical artery blood is taken for the first blood glucose measurement; newborns with a postnatal blood glucose level of 2.2-7.0 mmol/L were enrolled. (3) After late umbilical cord cutting, newborns received routine umbilical cord care and physical examination on the rewarming table and were wrapped to keep warm if no abnormalities were found. Mother-infant skin contact and the breast crawl were carried out at 30-60 min. (4) If the postnatal umbilical artery blood glucose was higher than 2.6 mmol/L, heel blood samples were collected at 1 hour and 2 hours after delivery to measure the peripheral blood glucose value after routine intervention. (5) Newborns with blood glucose of 2.2-2.6 mmol/L immediately after birth were fed 10% glucose solution (10 mL/kg) or infant formula (10 mL/kg) at the first feeding. Peripheral blood glucose was measured again at 30 and 60 minutes after the intervention, and newborns whose post-intervention blood glucose remained below 2.2 mmol/L were transferred to pediatrics.
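Before turning to the observation group's scheme, the blood glucose thresholds that drive the protocol above can be summarized as simple decision logic. The sketch below is purely illustrative, not a clinical tool: the thresholds (in mmol/L) are the ones stated in the text, while the function name and messages are invented.

```python
# Schematic rendering of the management thresholds described in the text;
# illustrative only, not a clinical decision-support implementation.
def manage_neonatal_glucose(glucose_mmol_l: float) -> str:
    if glucose_mmol_l < 2.2:
        # hypoglycemia (or persistent post-intervention value below 2.2)
        return "hypoglycemia: transfer to pediatrics"
    if glucose_mmol_l < 2.6:
        # critical hypoglycemia: feed 10% glucose solution or formula at
        # 10 mL/kg, then re-measure peripheral glucose at 30 and 60 minutes
        return "critical: feed 10 mL/kg, re-check at 30 and 60 min"
    if glucose_mmol_l <= 7.0:
        return "within normal 24-h range: routine monitoring"
    return "above study range: outside protocol"

for value in (2.0, 2.4, 3.5):
    print(value, "->", manage_neonatal_glucose(value))
```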
The newborn lies prone on the mother's chest, and the two bodies can fit closely under the action of gravity, from the whole area of the body from the sternum to the pubic bone, and the thigh of the newborn. Calves and tiptoes are spontaneously applied to the maternal body or part of the environment (beds, sofas, chairs, clothes on the bed, etc.), and newborns can fix themselves without too much help. There is no need for the woman to exert too much pressure on the head and back, which is easier to breathe and does not need to change positions too much. This method is helpful to stimulate the natural lactation and foraging behavior of parturients and newborns. Newborns approach the breast and take the initiative to make the head swing vertically and the limbs and body swing together and finally achieve independent milk. (2) Treatment of neonatal hypoglycemia: on the basis of treatment in the control group, complete skin contact between mother and infant combined with breastfeeding continued after each formula feeding. (3) Treatment of neonatal critical hypoglycemia: continuous complete skin contact between mother and infant and breast feeding in biological position. The peripheral blood glucose was measured again at 30 and 60 minutes after intervention, and the blood glucose after intervention was lower than that after 2.2 mmol/L transfer to pediatrics. Data collection, comparison of implementation effect and conclusion evaluation between the two Blod glucose level2.2-6mmol/L, contunuous skin contact, combined with effective and frequent breastfeeding, peripheral blood glucose was measured a er 30min, and blood glucose was detected in other newborns. Immediate skin contact a er birth for 90 minutes' immediate breasfeeding, physical examnination a er 90min without abnormality Test group (1) The values of trace blood glucose in peripheral blood at birth and 1 hour and 2 hours after birth were observed. The normal value of blood glucose within 24 hours after birth was 2.2-7.0 mmol/2.6 mmol/L, and the critical blood glucose value was 2.2 mmol (2) To observe the hospitalization rate of neonatal pediatrics. Neonatal pediatrics hospitalization rate = the number of newborns transferred to neonatal pediatrics due to hypoglycemia/the total number of newborns in the group × 100% (3) To observe the ear temperature of 30 min, 60 min, 90 min, and 120 min after birth, the ear temperature of 30 min, 60 min, 90 min, and 120 min was measured and recorded by the midwife immediately after birth and immediately after birth. The normal ear temperature of the newborn was 36.5~37.5°C (4) The crying of newborns in the two groups was observed. Record one continuous crying, and record the next one if the crying interval is 3 minutes or more The milk yield of the two groups was observed before delivery, immediately after delivery, 15 minutes after early sucking, and 2 hours after delivery. Evaluation criteria of breast milk volume are as follows: 1 min artificial milking method was adopted to evaluate the amount of milk with naked eyes, no milk was extruded as (-), 1 drop or 2 drops was extruded as (+), milk could flow out continuously as (+ +), milk could flow out more or ejected out as (+), (-) and (+) represented insufficient breast milk, and (+) and (+) represented sufficient breast milk 2.4. Statistical Analysis. The statistical analysis of the data in this study uses SPSS24.0 software, and the statistical graphics are drawn by GraphPad Prism 8.0. 
Results

3.1. Peripheral Trace Blood Glucose of Newborns in Both Groups at 1 h and 2 h after Birth. The values of trace blood glucose at 1 hour and 2 hours after birth in the observation group were higher compared to the control group, and the difference between groups was statistically significant (P < 0.05), as indicated in Table 1.

3.2. Hospitalization Rate of Neonatal Pediatrics in the Two Groups. The neonatal hospitalization rate in the observation group was lower compared to the control group (P < 0.05), as indicated in Table 2.

3.3. Ear Temperature at 30 min, 60 min, 90 min, and 120 min after Birth in the Two Groups. The ear temperature at 30 min, 60 min, 90 min, and 120 min in the observation group was higher compared to the control group, and the difference between groups was statistically significant (P < 0.05), as indicated in Table 3.

3.4. Crying of Newborns in the Two Groups. The crying frequency of newborns in the observation group was lower compared to the control group, and the difference between groups was statistically significant (P < 0.05), as indicated in Table 4.

3.5. Incidence of Postpartum Hemorrhage and Uterine Contraction/Wound Pain Index in the Two Groups. The incidence of postpartum hemorrhage in the observation group was lower compared to the control group, and the difference between groups was statistically significant (P < 0.05). The rate of uterine contraction/wound pain index grade 1 in the observation group was higher compared to the control group, and the difference between groups was statistically significant (P < 0.05). The rates of uterine contraction/wound pain index grades 2 and 3 in the observation group were lower compared to the control group, and the differences between groups were statistically significant (P < 0.05), as indicated in Tables 5 and 6.

3.6. Lactation of the Two Groups 2 Hours after Delivery. The rate of lactation (+++) at 2 hours postpartum in the observation group was higher compared to the control group, and the difference between groups was statistically significant (P < 0.05), as indicated in Table 7.

Discussion

Gestational diabetes mellitus (GDM) refers to abnormal glucose metabolism of varying degrees that occurs or is first recognized during pregnancy [11]. The maternal blood glucose level of pregnant women with GDM is high, which causes a large amount of glucose to enter the fetus through the placenta, so the fetal blood glucose level also increases [12]. The increase in blood glucose level can stimulate compensatory proliferation of fetal islet cells, which produce more insulin than needed [13].
After delivery, the newborn's glycogen stores are insufficient, and newborns delivered by pregnant women with GDM tend to have heavy body weight, resulting in increased glucose metabolism and consumption and a significantly increased incidence of neonatal hypoglycemia after birth [14]. Hypoglycemia leaves the energy supply for neonatal brain cell metabolism insufficient, so brain metabolism and other physiological activities cannot be carried out normally [15]. If neonatal hypoglycemia is not treated in time, it may cause irreversible brain damage [16, 17]. Maintaining the dynamic stability of blood glucose is an important physiological link in the transition from fetus to newborn. Studies have indicated that the lower the blood sugar and the longer its duration, the greater the likelihood of brain injury [18]. Regardless of gestational age and age, whole blood glucose < 2.2 mmol/L was diagnosed as neonatal hypoglycemia; blood glucose < 2.6 mmol/L was critical hypoglycemia, for which clinical intervention was needed. Whether brain injury occurs in children with hypoglycemia is related not only to the previously recognized lowest blood glucose level and duration of hypoglycemia but also to blood glucose variation indexes such as the maximum blood glucose fluctuation, the standard deviation of the blood glucose level, and the average blood glucose fluctuation [19]. It is suggested that the blood glucose level should be closely monitored during the clinical treatment of hypoglycemia, and that hypoglycemia should not be corrected too fast or over too wide a range. Early intervention in clinical work is expected to reduce a series of hazards caused by hypoglycemia [20]. In 2019, a study on the integration of clinical nursing practice guidelines for GDM pointed out that the blood glucose level of newborns should remain higher than 2.5 mmol/L within 24 hours after birth. Studies have indicated that the decrease in blood glucose of IDMS is most obvious within 0.5 hours after birth, which is also an important period for preventing hypoglycemia [21]. Blood glucose of IDMS in the 2-48 hours after birth shows an upward trend. Transient hypoglycemia is common during this period, and because newborns can tolerate hypoglycemia to a certain extent, typical clinical symptoms may be lacking in the early stage [22]. According to the American Academy of Pediatrics, blood glucose monitoring should be performed routinely in children with high-risk factors for hypoglycemia. One study monitored the blood sugar of newborns who were not fed for 3 hours and found that the blood sugar was lowest within 1-2 hours and increased after 3 hours [23]. Studies have indicated a tendency for blood glucose to remain in the normal range with the continuous maturation of the blood glucose regulation mechanism of IDMS and reasonable feeding. At present, China has not formulated guidelines for the management of neonatal hypoglycemia, and the diagnosis of neonatal hypoglycemia basically follows the previous clinical or epidemiological definitions [24]. When newborns develop critical hypoglycemia, they are often fed with 10% glucose or formula.
The clinical management of neonatal hypoglycemia follows a standardized mode that is limited to monitoring the blood glucose value on the basis of early breastfeeding, with 10% GS or formula intervention as the main line [25]. This scheme cannot reduce the incidence of hypoglycemia at 30 min after birth, so it has some limitations. Studies have indicated that an osmotic concentration of 10% GS > 400 mmol/L will damage the intestinal mucosa of newborns and lead to necrotizing enterocolitis; although the osmotic concentration of formula is less than 400 mmol/L, a rapid increment will also damage the intestinal mucosa of newborns [26]. Breast milk is considered the most ideal food to prevent neonatal hypoglycemia. Continuous breastfeeding and frequent sucking by the newborn can promote the sympathetic adrenaline stress response, which in turn raises blood sugar. The Breastfeeding Promotion Strategy Guide points out that healthy newborns and mothers practising early sucking and early contact have higher blood glucose levels at 75-180 min after birth [27]. SSC nursing is a new nursing mode in which the newborn, at birth or shortly after birth, is placed naked on the mother's bare chest. The peripheral receptors of the newborn are stimulated by the skin contact, and the signals are transmitted by neurons to the tactile and motor sensory systems and the vestibule, which respond when the signal is received, effectively adjusting the movement state of the body and relieving the stress response [28]. Rongjin et al. proposed that SSC is helpful to reduce the incidence of neonatal hypoglycemia [29]. At present, the first clinical breastfeeding is completed within 30-60 min after birth, or crawling is used to complete the first early sucking, but it is not clear whether early breastfeeding of IDMS can be completed earlier. Therefore, this study was carried out to explore the value of early comprehensive intervention of skin contact combined with breastfeeding in improving early blood glucose in neonates of mothers with GDM. The results of this study indicated that after the early comprehensive intervention of skin contact combined with breastfeeding, the peripheral blood glucose at 1 hour and 2 hours after birth, the ear temperature at 30 min, 60 min, 90 min, and 120 min after birth, and the lactation rate (+++) at 2 hours postpartum were all higher than with routine intervention (P < 0.05). The neonatal hospitalization rate, the neonatal crying frequency, the incidence of postpartum hemorrhage, and the rates of uterine contraction/wound pain index grade 2 and grade 3 were lower than those of routine intervention (P < 0.05) [30]. This shows that the early comprehensive intervention of skin contact combined with breastfeeding helps to stabilize the early blood glucose and body temperature of newborns, promote breast milk secretion, avoid neonatal crying and restlessness, reduce the hospitalization rate for neonatal hypoglycemia, and reduce the incidence of postpartum hemorrhage. This is mainly because, as a number of research reports have pointed out, hypothermia is an important risk factor for neonatal hypoglycemia [30, 31]. Owing to the great change in environmental temperature after birth, newborns need a large amount of sugar energy to be converted into heat to maintain body temperature, so energy consumption increases significantly.
However, traditional breastfeeding and skin contact are usually performed after the completion of routine neonatal nursing operations, such as umbilical cord ligation and measurement of body mass and body length. At this stage, the maternal glucose supply is interrupted, the stimulation of the new environment triggers anaerobic metabolism and consumes a lot of blood sugar, and energy consumption is further increased by the change of ambient temperature and crying after delivery [31]. On the one hand, timely and good warmth transfer can effectively raise the newborn's body temperature, avoid neonatal crying, reduce extra energy consumption, and avoid hypoglycemia [32-34]. Meanwhile, it can increase the duration of neonatal breastfeeding and the success rate of breastfeeding and increase the amount of maternal lactation [35]. The studies of Feng et al. indicated that newborns were in a quiet awake or sleeping state most of the time during mother-infant skin contact, and that the number and duration of crying episodes decreased significantly; skin contact plays an effective positive role in regulating the behaviour of newborns [36]. The qualitative research of Yufeng indicates that the mother is full of satisfaction and pleasure during complete skin contact, a physically and mentally pleasant experience that establishes the mother's sense of responsibility as soon as possible and promotes spontaneous breastfeeding and soothing of the newborn [37]. On the other hand, biological nurturing, also known as semi-lying lactation, was first recorded by Klaus and Kennel (1976) [38, 39]. Babies release innate reflexes and mothers release instinctive behaviour, and both mother and newborn can be in a relatively relaxed position during breastfeeding. The newborn lies prone on the mother's bare chest, the body close to the mother, supported and fixed by the mother's body and the surrounding environment. The mother can hold the newborn frequently for a long time, notice the feeding responses of the newborn in time, breastfeed promptly, and provide energy, which can well solve the problems of energy consumption, crying, and insufficient warmth [40]. It can also enhance the effective sucking rate of newborns, promote uterine contraction, reduce the amount of postpartum bleeding as well as the feeling of uterine contraction/wound pain, and strengthen the overall intervention effect. There are some limitations to this study. First, the sample size is not large and this is a single-center study, so bias is inevitable. In future research, we will carry out multicenter, large-sample prospective studies, from which more valuable conclusions can be drawn. In conclusion, early comprehensive intervention of skin contact combined with breastfeeding can significantly increase the early blood glucose of newborns of mothers with GDM, effectively prevent the occurrence of early hypoglycemia in these newborns, avoid a series of serious complications caused by excessive fluctuation of blood sugar, promote the stability of the newborn's vital signs, reduce the hospitalization rate of newborns, improve the success rate of breastfeeding, reduce uterine contraction/wound pain, and reduce the incidence of postpartum hemorrhage.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2022-08-03T15:12:00.655Z
2022-07-31T00:00:00.000
{ "year": 2022, "sha1": "a8b3649395c97655a2fe33e65defb62115a37d74", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2022/2305239.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "58ea5de9c9becd1647e39e019509023b6a5ed634", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
139951507
pes2o/s2orc
v3-fos-license
Quartz Tuning Fork Tip Length Influence on the Shear Force Imaging in Liquids

Shear force detection has been widely used for regulating the tip-sample distance in scanning near-field optical microscopy (SNOM) since its implementation. In this technique, an optical fibre tip is fixed to a vibrating element, which is excited near its mechanical resonance frequency in such a way that the tip vibrates parallel (shear force mode) or perpendicular (tapping mode) to the sample surface. As the tip approaches close to the sample surface, the tip-sample interaction forces experienced by the sensor are detected as a shift in resonance frequency and changes in vibrational amplitude and phase. These signals are used to maintain the tip-sample distance of the SNOM during scanning. The imaging performance of the dynamic distance control is mainly determined by the response sensitivity of the sensor to the tip-sample interaction forces. It has been shown that the minimum detectable force has an inverse dependence on the square root of the sensor's quality factor Q and dithering frequency [1, 2]. Sometimes the sensor has to be immersed in an aqueous medium. In these cases, the force detection sensitivity is considerably degraded owing to capillary and viscous damping. Therefore, imaging of such samples becomes difficult, especially in the shear force detection mode. In this work, we investigate the possibility of using a tuning fork with a long (5 mm) tungsten tip to operate in liquid environments. We have investigated the damping of the tip oscillation as a function of tip length in air and in a liquid at higher modes of the quartz tuning fork (QTF). The tuning fork and tip oscillation amplitude-frequency dependence in air, and with the tip in contact with a surface, were investigated using Finite Element Modeling and experimentally.

Introduction

Shear force detection has been widely used for regulating the tip-sample distance in scanning near-field optical microscopy (SNOM) since its implementation. In this technique, an optical fibre tip is fixed to a vibrating element, which is excited near its mechanical resonance frequency in such a way that the tip vibrates parallel (shear force mode) or perpendicular (tapping mode) to the sample surface. As the tip approaches close to the sample surface, the tip-sample interaction forces experienced by the sensor are detected as a shift in resonance frequency and changes in vibrational amplitude and phase. These signals are used to maintain the tip-sample distance of the SNOM during scanning. The imaging performance of the dynamic distance control is mainly determined by the response sensitivity of the sensor to the tip-sample interaction forces. It has been shown that the minimum detectable force has an inverse dependence on the square root of the sensor's quality factor Q and dithering frequency [1, 2]. Sometimes the sensor has to be immersed in an aqueous medium. In these cases, the force detection sensitivity is considerably degraded owing to capillary and viscous damping. Therefore, imaging of such samples becomes difficult, especially in the shear force detection mode.

In this work, we investigate the possibility of using a tuning fork with a long (5 mm) tungsten tip to operate in liquid environments. We have investigated the damping of the tip oscillation as a function of tip length in air and in a liquid at higher modes of the quartz tuning fork (QTF).
The tuning fork and tip oscillation amplitude-frequency dependence in air, and with the tip in contact with a surface, were investigated using Finite Element Modeling and experimentally.

Theoretical aspects of electrochemical etching of tungsten tips

Under the influence of a high electrostatic field, electrons can be emitted from the surface of a tungsten tip into the electrolyte. This purely quantum mechanical phenomenon is called field emission, and it proved to be a proper tool for the characterization of our tips, since it allowed us to gain some useful insights into the sharpness of the tungsten tips of our probe [3].

The electrostatic field at the tip surface increases at regions of high curvature. If a voltage V is applied to a sphere of radius R, the electrostatic field F at its surface is given by:

F = V / R.   (1)

The tip is composed of a sphere attached to a conical shank. The electrostatic field at the apex surface will be lower than predicted by Eq. (1) because the presence of the shank modifies the field line distribution. The surface electrostatic field for a tip-shaped object can then be approximated by:

F = V / (k R),   (2)

where k is the field reduction factor and R is the tip radius. From Eq. (2), it can be seen that for a given applied voltage and field reduction factor, a smaller tip apex radius will yield a higher electrostatic field. From the Fowler-Nordheim equations [4] we also know that a higher electrostatic field will produce a higher current density of field emission J (A/mm²):

J = a(μ, φ) F² exp(−b φ^(3/2) / F),   (3)

for μ (eV), φ (eV) and F (V/mm), where a(μ, φ) is a slowly varying prefactor and b is the second Fowler-Nordheim constant. Here μ is the Fermi level and φ is the metal's work function. Including the image-potential correction factor α, we then have:

J = a(μ, φ) F² exp(−b α φ^(3/2) / F),   (4)

where α has a value between 0 and 1. If Eq. (4) is multiplied by the total field emitting area A (mm²) and Eq. (2) is used to express the electrostatic field F, the total field emitted current I (A) is obtained as a function of the tip radius R (mm):

I = a(μ, φ) A (V / (k R))² exp(−b α φ^(3/2) k R / V).   (5)

Eq. (5) provides us with an explicit description of the relationship between tip sharpness and field emission data: for any tip voltage, the smaller the tip apex radius, the higher the total field emitted current. This principle is the essential factor behind our quick tip sharpness test.

The tip radius can be estimated by direct application of the Fowler-Nordheim theory of field emission. If Eq. (5) is divided by V² and the natural logarithm is taken on both sides, Eq. (6) is obtained:

ln(I / V²) = ln(a(μ, φ) A / (k² R²)) − b α φ^(3/2) k R / V,   (6)

which, plotted against 1/V, yields a straight line of slope s = −b α φ^(3/2) k R. This particular way of graphing field emission data is called a Fowler-Nordheim plot and simply requires recording the total field emission current for various applied voltages. Provided that the values of φ, α and k are known, it is then a straightforward matter to evaluate the tip radius from the slope of a Fowler-Nordheim plot [4]. The experimental setup permitting the acquisition of such field emission data is quite simple.

The use of electrochemical etching has enabled us to achieve a sharpness of the tungsten tip where the sphere diameter is smaller than 50 μm (Fig. 1). The force sensor for the shear force microscope (SFM) has been produced by gluing a sharpened probe to a QTF or a resonator of some other type [2].

The gluing quality is ensured by the use of a specially constructed x, y, z coordinate manipulator placed in the observation zone of a microscope. The QTF and sensor are shown in Fig. 2.
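As a concrete illustration of the sharpness test, the regression behind a Fowler-Nordheim plot takes only a few lines. The sketch below works in SI units with illustrative current-voltage data; b is the standard second Fowler-Nordheim constant, while alpha, k and the I-V values are assumed placeholders, not values from this work:

```python
# Sketch of the tip-radius estimate from a Fowler-Nordheim plot:
# ln(I/V^2) is regressed against 1/V; the slope s = -b*alpha*phi^(3/2)*k*R,
# so R = -s / (b*alpha*phi^(3/2)*k).
import numpy as np

V = np.array([300.0, 350.0, 400.0, 450.0, 500.0])        # applied voltage, V (illustrative)
I = np.array([2e-9, 1.2e-8, 4.5e-8, 1.3e-7, 3.0e-7])     # emission current, A (illustrative)

slope, intercept = np.polyfit(1.0 / V, np.log(I / V**2), 1)

b = 6.83e9      # second Fowler-Nordheim constant, V/(m * eV^(3/2)) (standard value)
alpha = 0.8     # image-potential correction, between 0 and 1 (assumed)
phi = 4.5       # work function of tungsten, eV
k = 5.0         # field reduction factor (assumed)

R = -slope / (b * alpha * phi**1.5 * k)   # tip radius, m
print(f"estimated tip radius: {R * 1e9:.1f} nm")
```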
Modelling the quartz tuning fork

The probe must be within a few nanometers of the surface because of the interaction volume of the highly localized field enhancement. In order to guarantee this tip-sample interaction distance, an SFM is used. This is clearly seen in Fig. 3. An SFM has the capacity to hold a probe just several nanometers above the surface of the sample. The amplitude and phase of the quartz tuning fork are monitored by high-speed electronics, while the Z-piezo is controlled by a Proportional-Integral-Differential (PID) feedback loop. The function of the feedback loop is to maintain a constant height of the imaging probe above the sample topography while scanning takes place.

Fig. 3 The drawing of the tuning fork SFM

The use of imaging probes mounted onto quartz tuning forks makes angstrom-level resolutions possible [5]. The sample is placed under the probe to ensure maintenance of laser illumination and probe alignment during the raster scanning of the sample.

In this work, a finite element model (FEM) of the QTF is suggested. In order to make a precise 3D model, the dimensions of the QTF are set in accordance with measurements carried out in an optical microscope. In contrast to commercial AFM sensors, where cantilevers present a rotation in relation to the X coordinate, the tuning fork model rotates in relation to the Z coordinate; thus, the width (T) and the thickness (W) are defined in opposite ways compared to AFM cantilevers (Fig. 4).

As mentioned previously, the QTF is based on piezoelectric phenomena. Therefore, the linear piezoelectricity equations of elasticity have to be defined and coupled to the electrostatic charge by means of the piezoelectric constants [6]. In the standard stress-charge form these constitutive relations read:

T = c_E S − e^T E,  D = e S + ε_S E,

where T is the stress vector, S is the strain vector, E is the electric field vector, D is the electric flux density vector, c_E is the elastic stiffness matrix at constant electric field, e is the piezoelectric constant matrix, and ε_S is the dielectric permittivity matrix at constant strain. The different physical properties of quartz have been extensively studied [7, 8]. However, there are insignificant differences in the elastic, piezoelectric, and dielectric permittivity matrices among the various reported results. In the present work, properties from [8] have been selected because more complex measurement techniques based on resonance ultrasound spectroscopy were used to determine the values.

The piezoelectric constant matrix, which allows the structural and electrical behaviour of the material to be coupled, is taken from [8]. The piezoelectric behaviour of the material is achieved by using the element type SOLID226 in ANSYS. The model is defined in MKS units in line with ANSYS nomenclature.

Results and discussion

Below is a general view of the QTF (Fig. 5). The difference in tuning fork oscillation resonance frequency between the free tip in air and the tip in contact with a surface was used as a measure to evaluate the influence of tip length on the shear force sensor's sensitivity to acting forces (Figs. 6-9). A tungsten tip of 120 µm diameter was used for the simulations. The simulation results have shown a nonlinear dependence of the quartz tuning fork oscillation amplitude, Q-factor, and frequency difference on the tip length.

The simulation results demonstrate that the tip length has to be selected to obtain optimal conditions: a high Q-factor together with a maximum frequency shift.

We investigated the possibility of using long probes to obtain images in liquids. Our experiments demonstrate that the etched tip quality is good enough to perform measurements in liquids with a resolution better than 50 nm. The possibility of applying our system to imaging biological samples of Chinese hamster ovary (CHO) cells was investigated. The high quality of the images was demonstrated (Fig. 10).
Conclusions

1. These results suggest that the etched tungsten tips would be a promising tool for biological applications of shear force and atomic force microscopy with nanopositioning systems [9] and can provide valuable information for understanding some biological problems.

2. The experimental results suggest that a tuning fork sensor with a long tip would be a promising set-up for measurements and imaging in liquid.

3. The optimal length of the tip has to be selected for the experimental conditions to obtain maximum sensitivity. The most sensitive tuning fork at 32 kHz is 2 mm long: the difference in oscillation resonance frequency between the free tip in air and the tip in contact with a surface is 2.83 kHz. The most sensitive tuning fork at 190 kHz is 0.4 mm long: the difference in oscillation resonance frequency between the free tip in air and the tip in contact with a surface is 2.6 kHz.

4. Positive results of test measurements with the fabricated sensors for shear force microscopy show that the developed sensors can be used for shear force microscopy with a resolution better than 50 nm.

Fig. 1 Optical microscope image of the sharpened probe tip end. Monitoring the etching current drawn from tungsten tips as a function of applied voltage is a convenient way to characterize the sharpness of the tips of our probe.

Fig. 2 Photograph of the QTF probe: a) general view; b) sensor used for the shear force microscope; the probe is parallel to the quartz fork.
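The modal figures quoted in the conclusions (a fundamental near 32 kHz and a higher mode near 190 kHz) can be cross-checked with Euler-Bernoulli cantilever theory applied to a single tine. The material constants below are for quartz; the tine dimensions are assumed for illustration and are not taken from this paper:

```python
# Rough analytical cross-check for the QTF modal analysis: each tine is treated
# as an Euler-Bernoulli cantilever, f_n = (lambda_n^2 / (2*pi)) * sqrt(E*I/(rho*A)) / L^2.
import math

E = 78.7e9      # Young's modulus of quartz, Pa
rho = 2650.0    # density of quartz, kg/m^3
L = 3.8e-3      # tine length, m (assumed)
t = 0.60e-3     # tine thickness in the vibration direction, m (assumed)
w = 0.35e-3     # tine width, m (assumed)

I = w * t**3 / 12.0          # second moment of area
A = w * t                    # cross-section area
lam = [1.8751, 4.6941]       # first two cantilever eigenvalues

for n, l in enumerate(lam, start=1):
    f = (l**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A)) / L**2
    print(f"mode {n}: {f / 1e3:.1f} kHz")
```

With these assumed dimensions the sketch gives roughly 37 kHz and 229 kHz, the same regime as the fundamental and first overtone discussed above.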
2019-04-30T13:06:52.417Z
2018-02-21T00:00:00.000
{ "year": 2018, "sha1": "11849dc2c72c9128880290dec165014906aba9ec", "oa_license": "CCBY", "oa_url": "http://mechanika.ktu.lt/index.php/Mech/article/download/19059/9292", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "11849dc2c72c9128880290dec165014906aba9ec", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
41989985
pes2o/s2orc
v3-fos-license
How sensitive and specific are MRI features of sacroiliitis for diagnosis of spondyloarthritis in patients with inflammatory back pain?

OBJECTIVE To determine the sensitivity and specificity of MRI features of sacroiliitis in spondyloarthritis (SpA). MATERIALS AND METHODS A retrospective study reviewed MRI of the sacroiliac (SI) joints in 517 patients with inflammatory back pain. The sensitivity, specificity, and positive and negative likelihood ratios of active and structural lesions of sacroiliitis were calculated, with the final clinical diagnosis as gold standard. RESULTS MRI showed active inflammation in 42% of patients (bone marrow oedema (BMO) (41.5%), capsulitis (3.3%), enthesitis (2.5%)) and structural changes in 48.8% of patients (erosion (25%), fat infiltration (31.6%), sclerosis (32%) and ankylosis (7.6%)). BMO was the MRI feature with the highest sensitivity (65.1%) for diagnosis of SpA. Capsulitis (99%), enthesitis (98.4%), ankylosis (97.4%) and erosion (94.8%) had a high specificity for diagnosis of SpA, whereas BMO (74.3%), sclerosis (75.8%) and fat infiltration (84.0%) were less specific. BMO concomitant with enthesitis, capsulitis or erosions increased the specificity. Concomitant presence of BMO and sclerosis or fat infiltration decreased the specificity. CONCLUSION BMO is moderately sensitive and specific for diagnosis of SpA in patients with inflammatory back pain. BMO concomitant with enthesitis, capsulitis, ankylosis or erosion increases the specificity. Concomitant fat infiltration or sclerosis decreases the specificity for diagnosis of SpA. Of all lesions, erosion had by far the highest positive likelihood ratio for diagnosis of SpA.

Spondyloarthritis (SpA) is a group of inflammatory joint conditions sharing common clinical, radiological, genetic and even therapeutic characteristics, often associated with the presence of human leukocyte antigen (HLA)-B27 (1-5). Early diagnosis of SpA has gained significance for rheumatologists as new medical treatment options have become available (6). MRI of the SI joints shows active inflammatory and structural lesions of sacroiliitis long before radiographic changes become evident (7, 8).

Active sacroiliitis on MRI is an important classification criterion. MRI is regarded as 'positive' for sacroiliitis if bone marrow oedema (BMO) is clearly present (8). Other MRI features representing active inflammation of the SI joint, such as enthesitis or capsulitis alone, are not sufficient for a 'positive' MRI for sacroiliitis. Structural lesions in sacroiliitis are sclerosis, fat infiltration, erosion and finally ankylosis (8).

These active and structural changes of the SI joints, in particular the presence of BMO, may also be present in patients presenting with non-rheumatological entities that clinically mimic sacroiliitis, such as degenerative disease, lumbosacral

A total of 517 patients were included in the study (185 (35.8%) men, 332 (64.2%) women), with a median age of 33.3 years (range 16.1-44.9).

The aim of this study is to determine the sensitivity and specificity of MRI features of sacroiliitis in SpA. We also sought to find out whether BMO concomitant with other MRI features of sacroiliitis may increase the sensitivity and specificity for diagnosis of SpA.

Materials and methods

The retrospective study was approved by the institutional ethics committee.

Study group

All participants, aged 16

Keywords: spine, MR, arthritis. From: 1. Department of Radiology and Medical Imaging and 2. Department of Rheumatology, Ghent University Hospital, Ghent, Belgium; 3. Department of Radiology, University of Alberta Hospital, Edmonton, Alberta, Canada. Address for correspondence: Dr L. Jans, M.D., Ph.D., Dpt of Radiology, Ghent University Hospital, De Pintelaan 185, 9000 Gent, Belgium. E-mail: lennartjans@hotmail.com

specificity of 74.6%. Presence of enthesitis and capsulitis had a low sensitivity, but a very high specificity, for diagnosis of SpA. The diagnostic utility of active and structural lesions of sacroiliitis for the diagnosis of SpA was determined by calculating the sensitivity, specificity, and positive and negative likelihood ratios for the consensus reader data, with the final clinical diagnosis as gold standard.
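The diagnostic-utility quantities named here reduce to a few ratios over the 2 × 2 table of MRI finding versus final clinical diagnosis. The sketch below is illustrative only; its counts are chosen to roughly reproduce the reported BMO figures (Se 65.1%, Sp about 74%) and are not the study's data:

```python
# Sensitivity, specificity, PPV and likelihood ratios from a 2x2 table
# (index MRI feature vs. final clinical diagnosis as reference standard).
def diagnostic_utility(tp, fp, fn, tn):
    se = tp / (tp + fn)          # sensitivity
    sp = tn / (tn + fp)          # specificity
    ppv = tp / (tp + fp)         # positive predictive value
    lr_pos = se / (1 - sp)       # positive likelihood ratio
    lr_neg = (1 - se) / sp       # negative likelihood ratio
    return dict(Se=se, Sp=sp, PPV=ppv, LRpos=lr_pos, LRneg=lr_neg)

# e.g. BMO as the index feature in a 517-patient cohort (counts are made up):
print(diagnostic_utility(tp=112, fp=88, fn=60, tn=257))
```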
Image review

The MR images were reviewed for the presence of active or structural lesions of sacroiliitis by two musculoskeletal radiologists with 10 and 14 years of experience (LJ, WH), who were blinded to clinical and other imaging findings.

- Active lesions of sacroiliitis included BMO, enthesitis and capsulitis (8). BMO was considered 'positive' for sacroiliitis if high T2 FS/STIR signal of the ilium or sacrum, typically located periarticularly, was present. If there is one signal (lesion) only, it should be present on at least two slices. If there is more than one signal on a single slice, one slice was considered to be enough (8). Enthesitis was defined as high T2 FS/STIR signal of an enthesis representing BMO, soft tissue inflammation or joint/bursal fluid. Capsulitis is seen as high signal on STIR images involving the anterior or posterior capsule of the SI joint (7, 8).

- Structural lesions of sacroiliitis consisted of sclerosis, fat infiltration, erosion and ankylosis. Sclerosis is depicted as low-signal subchondral bands extending at least 5 mm on all sequences. Erosions are low T1 signal bone defects at the joint margin that may occur throughout the cartilaginous joint compartment. Fat deposition is depicted as periarticular high T1 signal in the bone. Ankylosis of the SI joint appears as low signal on all sequences bridging the SI joint and may show high T1 signal if bone marrow is present (8).

Representative images are presented in Figs. 1-6.

Statistical analysis

Statistical analysis was performed using the software package SPSS 20.0 for Windows (SPSS, Chicago, IL, USA). Basic descriptive statistics were performed where appropriate.

Results

The sensitivity, specificity, and positive and negative likelihood ratios of the individual MRI features of sacroiliitis for the diagnosis of SpA are summarized in Table I. BMO concomitant with ankylosis, erosion or fat infiltration had a moderately higher positive likelihood ratio (LR+) for diagnosis of SpA than the concomitant presence of enthesitis or sclerosis (Table II).

Discussion

Early assessment of inflammation in the course of SpA gains importance in the light of new therapeutic strategies. The value of MRI of the SI joints is well established and has resulted in a definition of a 'positive MRI' for sacroiliitis (8, 11, 12). In the current ASAS criteria for axial SpA, MRI plays an important role: a 'positive MRI' is a key criterion for disease classification (8, 13, 14).
However, in daily radiology practice it remains challenging to decide if demonstrated BMO is sufficient to truly represent active sacroiliitis as seen in SpA. In our study we found a moderate sensitivity (65%) and, in this context more importantly, only a moderate specificity (74%) of BMO of the SI joints for diagnosis of SpA. These figures are similar to those published in the paper by Weber et al., who also reported on the limitations of using BMO as a single criterion in the current ASAS definition of a 'positive' MRI (16). We confirmed their findings, and also showed that concomitant fat infiltration and sclerosis decrease the diagnostic value of MRI. This is not surprising, since these features are commonly seen in degeneration, with mechanical back pain mimicking inflammation in these patients (15). Our study showed that the concomitant presence of other features of active sacroiliitis, such as enthesitis or capsulitis, indicating an ongoing true inflammatory process, increased the specificity. The presence of structural lesions also contributes substantially to the diagnostic utility of MRI for diagnosis of SpA. The concomitant presence of erosions and ankylosis, both hallmarks of the disease, also increased the specificity (8). On the other hand, the concomitant presence of fat infiltration and sclerosis decreased the specificity for diagnosis of SpA, which may be explained by the presence of the same MRI features in degenerative processes (15).

Weber et al. stressed the importance of erosions as an MRI feature of SpA (16, 17). In our study we found that of all lesions of the SI joints, the presence of erosion had the highest LR+ (10.4) for diagnosis of SpA. This finding confirms that erosion could play a role as a new or additional criterion in future classification systems, especially as it improves the specificity of MRI of the SI joints for diagnosis of SpA.

In our study we also found a very high specificity of enthesitis (98%) and capsulitis (99%) for diagnosis of SpA. However, as these lesions were not commonly seen (in 4% and 6%, respectively), they might not be as useful as classification criteria. Still, the presence of enthesitis or capsulitis may be particularly helpful in equivocal cases or may indicate that active inflammation is present at a certain stage in the disease process.

There are some limitations to our study. Firstly, MRI was the only imaging technique, without correlation with radiography or CT. Secondly, the patient population represented referrals from a single tertiary center; referral patterns for sacroiliitis may therefore differ from those in other settings.
Fig. 1. BMO and capsulitis of the SI joint in a 22-year-old male with spondyloarthritis. Semicoronal STIR image shows BMO (arrows) of the left SI joint with concomitant posterior capsulitis (short arrows).

Fig. 2. BMO and enthesitis of the SI joint in a 24-year-old female with spondyloarthritis. (A-B) Semicoronal STIR images show BMO (arrows) of the right SI joint with concomitant enthesitis of the left retroarticular ligaments (long arrow).

Fig. 3. BMO and concomitant erosion of the SI joint in a 26-year-old female with spondyloarthritis. A. Semicoronal T1-weighted MR image shows extensive erosions of the SI joints (arrows). B. Semicoronal STIR image shows BMO of both SI joints (arrows).

Fig. 4. BMO and ankylosis of the SI joint in a 33-year-old male with spondyloarthritis. Semicoronal (A) T1-weighted and (B) STIR MR images show ankylosis of the left SI joint (short arrows) with concomitant BMO of the right SI joint (long arrows).

Fig. 5. BMO and fat infiltration of the SI joint in a 44-year-old female with degenerative joint changes. Semicoronal (A) T1-weighted and (B) STIR MR images show fat infiltration of the SI joints (short arrows) with concomitant BMO of the left SI joint (long arrow).

Fig. 6. BMO and sclerosis of the SI joint in a 44-year-old female with degenerative joint changes. Semicoronal (A) T1-weighted and (B) STIR MR images show sclerosis of the right SI joint (short arrows) with concomitant BMO of both SI joints (long arrows).

Table I. The sensitivity, specificity, and positive and negative likelihood ratios of MRI features of sacroiliitis for the diagnosis of SpA.
2018-04-03T05:13:32.138Z
2014-07-01T00:00:00.000
{ "year": 2014, "sha1": "26c97b5a0eb8fef49c5ccb1c302b45e76818d32d", "oa_license": "CCBY", "oa_url": "http://www.jbsr.be/articles/10.5334/jbr-btr.94/galley/91/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "26c97b5a0eb8fef49c5ccb1c302b45e76818d32d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235354382
pes2o/s2orc
v3-fos-license
Purification and characterization of the receptor-binding domain of SARS-CoV-2 spike protein from Escherichia coli

Abstract SARS-CoV-2 is responsible for a disruptive worldwide viral pandemic and causes a severe respiratory disease known as COVID-19. The spike protein of SARS-CoV-2 mediates viral entry into host cells by binding ACE2 through the receptor-binding domain (RBD). The RBD is an important target for the development of virus inhibitors, neutralizing antibodies, and vaccines. RBD expressed in mammalian cells suffers from low expression yield and high cost. E. coli is a popular host for protein expression, with the advantage of easy scalability at low cost. However, RBD expressed by E. coli (RBD-1) lacks glycosylation, and its antigenic epitopes may not be sufficiently exposed. In the present study, RBD-1 was expressed by E. coli and purified on a Ni Sepharose Fast Flow column. RBD-1 was structurally characterized and compared with RBD expressed by HEK293 cells (RBD-2). The secondary structure and tertiary structure of RBD-1 were largely maintained without glycosylation. In particular, the major β-sheet content of RBD-1 was almost unaltered. RBD-1 could strongly bind ACE2 with a dissociation constant (K_D) of 2.98 × 10⁻⁸ M. Thus, RBD-1 is expected to find application in vaccine development, drug screening and virus test kits.

SARS-CoV-2 contains four structural proteins: the spike (S), envelope, membrane and nucleocapsid proteins [10, 11]. The S protein plays the most important roles in viral attachment, fusion, and entry. It serves as a target for the development of antibodies, entry inhibitors and vaccines [12, 13]. In addition, it binds to a host receptor, angiotensin-converting enzyme 2 (ACE2), through the receptor-binding domain (RBD) [14]. Thus, the RBD of the SARS-CoV-2 S protein is an appealing antigen for vaccine development, eliciting most neutralizing antibodies during SARS-CoV-2 infection [15]. Moreover, an advantage of an RBD-based vaccine is its ability to minimize host immunopotentiation [16]. The RBD contains 220 amino acid residues, with nine cysteine residues and two N-glycosylation sites [17]. The apparent molecular mass of the RBD was determined to be ∼34 kDa, whereas that of the RBD amino acid sequence alone was ∼27 kDa [17]. N-glycosylation and O-glycosylation were both observed in the analysis of the RBD [18]. The glycan moieties have a relevant role in the in vivo protein folding process, stability, and immunogenicity of the RBD [19]. The RBD has a central twisted antiparallel β-sheet formed by five strands, decorated with secondary structure elements and loops [20]. The RBD has been expressed in eukaryotic cell expression systems, including baculovirus-insect cells, yeast cells, and mammalian cells (e.g. HEK293 cells and CHO cells) [21, 22]. However, the low expression yield and high cost of RBD could not meet the need of therapeutics and vaccine development [23]. Bacterial expression systems for heterologous protein expression have the advantages of ease of use, low cost, short generation times, and scalability [24]. In particular, E. coli is one of the most popular bacterial hosts for heterologous protein expression [25]. Some studies suggest that the RBD of the SARS-CoV S protein expressed by E. coli could provide protective immunity [26, 27]. Moreover, the RBD (N318-V510) of the SARS-CoV-2 S protein was expressed by E. coli and has been used as a cost-effective antigen for worldwide serological testing [28]. However, RBD of SARS-CoV-2 expressed in E. coli (RBD-1) lacks disulfide bond formation and glycosylation.
Thus, the expression and protein folding of RBD-1 differ from those of the protein expressed in HEK293 cells (RBD-2). Because E. coli is cost-efficient for the expression of RBD, it is of interest to investigate the structure and ACE2-binding affinity of RBD-1. In the present study, RBD-1 was expressed by E. coli, renatured, and purified on a Ni Sepharose Fast Flow column. The structure of RBD-1 was characterized and its binding affinity to ACE2 was measured. The properties of RBD-1 were compared with those of RBD-2 to evaluate its effectiveness for vaccine design, drug screening, therapeutics, and virus test kits.

PRACTICAL APPLICATION

The receptor-binding domain (RBD) of the SARS-CoV-2 spike protein is a vital target mediating the entry of the virus into host cells. RBD has mainly been expressed in eukaryotic cell expression systems; the high cost and low expression level of RBD derived from eukaryotic cells limit its practical application. In contrast, RBD expressed by E. coli (RBD-1) has the advantage of easy scalability at low cost. In the present study, RBD-1 was expressed, purified, and structurally characterized. The secondary and tertiary structure of RBD-1 was largely maintained. The binding affinity of RBD-1 to angiotensin-converting enzyme 2 (ACE2) was measured, with a dissociation constant (K_D) of 2.98 × 10⁻⁸ M. Our study is of practical significance for SARS-CoV-2 vaccine development, drug screening and virus test kits.

Expression of RBD in HEK293 cells

Recombinant RBD of the SARS-CoV-2 S protein (RBD-2) was expressed by the HEK293 cells [17] and was a gift from the Academy of Military Medical Sciences (Beijing, China).

Construction of the RBD expression vector

The coding DNA sequence of the RBD of the SARS-CoV-2 S protein (amino acids 330-583 of the S protein) was from strain Wuhan-Hu-1 (GenBank Acc. No. NC_045512.2). The sequence was amplified by PCR and cloned into the pET28a bacterial expression vector with a C-terminal His-tag. The correct DNA sequence was confirmed by restriction digestion and DNA sequence analysis. The resulting plasmid was transformed into E. coli host BL-21 (DE3) cells. The expression strain was screened out by streaking the plate and was cultured in 5 mL LB medium.

Expression of RBD-1 in E. coli

The expression strain was incubated in LB medium (50 mL) overnight and then in LB medium (1 L) containing 0.1% kanamycin. The resulting culture was incubated with vigorous aeration at 37 °C until the culture density reached an absorbance of 0.6-0.8 at 600 nm. IPTG was added to the culture at a final concentration of 0.5 mM, and the culture was then incubated at 37 °C for 3 h. The cells were harvested by centrifugation at 4000 rpm for 35 min. The pellet was resuspended in 50 mM Tris-HCl buffer (pH 8.0) and sonicated in an ice bath. Finally, the inclusion bodies containing RBD were collected by centrifugation at 8000 rpm for 30 min.

Refolding and purification of RBD-1

The SDS-PAGE analysis

RBD-1 and RBD-2 were both analyzed by SDS-PAGE, using a 12% SDS-polyacrylamide gel under reducing (5% β-mercaptoethanol, v/v) conditions. The molecular weights of RBD-1 and RBD-2 were estimated with suitable markers. The gel was stained with Coomassie blue R-250.

Size exclusion chromatography analysis

Size exclusion chromatography (SEC) analysis of RBD-1 and RBD-2 was carried out on an analytical Superdex 200 column (1 cm × 30 cm, GE Healthcare, USA).
The column was equilibrated and eluted with PB buffer (pH 7.4) at a constant flow rate of 0.5 mL/min. RBD-1 and RBD-2 were both injected at a volume of 100 μL and a protein concentration of 1 mg/mL. The effluent was detected at 280 nm.

Circular dichroism spectroscopy

The secondary structures of RBD-1 and RBD-2 were measured by far-UV circular dichroism (CD) spectroscopy, using a Jasco J-810 spectropolarimeter (Jasco, Japan). The spectra were recorded between 260 and 190 nm. The two proteins were both at a protein concentration of 0.3 mg/mL in PB buffer (pH 7.4). The spectra represent an average of three individual scans and were corrected for absorbance caused by the buffer. A quartz cuvette with a 0.1 cm path length was used for the measurement. The secondary structure data of the two proteins were obtained with the structure-fitting software built into the Jasco J-810 spectropolarimeter.

Fluorescence spectroscopy

Intrinsic fluorescence measurements of RBD-1 and RBD-2 were carried out on a Hitachi F-4500 fluorescence spectropolarimeter (Hitachi, Japan) at room temperature. The intrinsic emission spectra were recorded from 300 to 400 nm with excitation at 280 nm. A quartz cuvette with a 1.0 cm path length was used for the measurement. The excitation slit width was 5 nm and the emission slit width was 10 nm. RBD-1 and RBD-2 were both at a protein concentration of 0.1 mg/mL in PB buffer (pH 7.4).

RBD-1 and RBD-2 were also examined by extrinsic fluorescence measurement, using ANS as the fluorescence probe. RBD-1 and RBD-2 were each mixed with 10-fold molar ANS at a protein concentration of 0.1 mg/mL in PB buffer (pH 7.4). The emission spectra were recorded from 400 nm to 650 nm at a scan rate of 120 nm/min with excitation at 350 nm. A quartz cuvette with a 1.0 cm path length was used for the measurement. The excitation slit width was 5 nm and the emission slit width was 10 nm.

2.10. Fourier-transform infrared spectroscopy

RBD-1 and RBD-2 were both analyzed by Fourier-transform infrared spectroscopy (FT-IR), using a Nicolet iS50 FT-IR spectrometer (Thermo Fisher, USA). The two proteins were both lyophilized and mixed with KBr, followed by pressing into a thin tablet. The FT-IR spectra were scanned from 4000 to 400 cm⁻¹, and the interferograms were presented as transmittance.

2.11. Interaction of RBD-1 with ACE2

The interaction between RBD-1 and ACE2 was determined by surface plasmon resonance (SPR), using a BIAcore T100 instrument (GE Healthcare, Boston, USA). ACE2 was immobilized on the surface of a CM5 sensor chip, using the cross-linkers EDC and NHS. RBD-1 and RBD-2 were measured at a flow rate of 10 μL/min with an association phase of 1 min after injection, followed by dissociation for 3 min. RBD-1 and RBD-2 were diluted with the filtered and degassed PBST running buffer (PBS buffer containing 0.5% Tween-20, pH 7.4). RBD-1 and RBD-2 were both injected at a concentration range of 12.5-200 nM. Dose-response data were collected in the single cycle format [29]. The data were automatically fitted to the 1:1 (Langmuir) binding model for both kinetics and steady-state affinity.

RBD-1 was induced and expressed in E. coli BL-21 cells. The cells were harvested and sonicated to obtain the cell-free extract for SDS-PAGE analysis. As shown in Figure 1, a major electrophoresis band corresponding to ∼27 kDa was observed in the culture with IPTG induction (Lane 3), which was absent in the culture without IPTG induction (Lane 2). Thus, RBD-1 was expressed in E. coli upon induction with IPTG.
Minor RBD-1 was also detected in the supernatant (Lane 4). However, RBD-1 was predominantly in the form of insoluble inclusion bodies (Lane 5). The inclusion bodies were solubilized and renatured for purification.

Purification of RBD-1

The inclusion bodies containing RBD-1 were subsequently dissolved, renatured and purified on a Ni Sepharose Fast Flow column (0.5 cm × 5.0 cm). The elution profile is shown in Figure 2.

SDS-PAGE analysis

As shown in Figure 3A, RBD-1 (Lane 2) exhibited a single electrophoresis band, corresponding to an apparent Mw of ∼27.0 kDa. RBD-2 (Lane 3) displayed a single electrophoresis band with a slower migration rate, corresponding to an apparent Mw of ∼35 kDa. This indicated the high purity of RBD-1 and RBD-2. The larger Mw of RBD-2 was due to glycosylation of the protein expressed by the HEK293 cells.

Size exclusion chromatography analysis

RBD-1 and RBD-2 were both analyzed on an analytical Superdex 200 column (1 cm × 30 cm) by size exclusion chromatography. As shown in Figure 3B, RBD-1 was eluted as a single, narrow peak at 20.4 mL. In contrast, RBD-2 was eluted as a single, broad peak at 18.0 mL. As compared with RBD-1, the left-shifted peak of RBD-2 was due to the glycosylation of RBD.

Fluorescence spectroscopy

Fluorescence spectroscopy was used to monitor the tertiary structure of RBD-1 and RBD-2. As shown in Figure 4A, RBD-2 exhibited a single broad peak with a maximum emission fluorescence intensity at 328 nm when the excitation wavelength was 280 nm. In contrast, the intrinsic fluorescence intensity of RBD-1 was much lower than that of RBD-2. Presumably, disulfide bonds may form in the Cys-abundant RBD-1 and exert a quenching effect on its fluorescence intensity [29]. The maximum fluorescence intensities of RBD-1 and RBD-2 were both at 328 nm. This indicated that the tertiary structure of RBD-1 was comparable to that of RBD-2.

RBD-1 and RBD-2 were each mixed with a fluorescent probe (ANS). The hydrophobicity of RBD-1 and RBD-2 was thus determined by measuring the extrinsic fluorescence (EF) intensities of the mixtures. As shown in Figure 4B, RBD-2 showed a single broad peak with a maximum EF intensity at 516 nm. In contrast, RBD-1 displayed a slightly lower peak with a maximum EF intensity at 518 nm. This suggested that glycosylation of the protein expressed in the HEK293 cells made the hydrophobic residues of RBD slightly less exposed to the surrounding environment. This lowered the EF intensity and caused the intensity peak of RBD-1 to be red-shifted.

Circular dichroism spectroscopy

The secondary structures of RBD-1 and RBD-2 were measured by far-UV CD spectroscopy. (Figure 5: Structural characterization of RBD-1. The CD spectra (A) were recorded from 260 to 190 nm. The FT-IR spectrum (B) was obtained from 4000 to 500 cm⁻¹.) As shown in Figure 5A, the CD spectra of RBD-1 and RBD-2 both displayed a single negative maximum ellipticity around 208 nm, a characteristic of β-sheet structure. The negative maximum ellipticity of RBD-2 around 208 nm was slightly larger than that of RBD-1. In particular, the β-sheet percentages of RBD-1 and RBD-2 were 73.7% and 74.7%, respectively. The random and α-helix contents of RBD-1 were 20.7% and 4.6%, respectively. The random and α-helix contents of RBD-2 were 22.9% and 2.0%, respectively. This indicated that the secondary structure of RBD-2 was slightly less dense than that of RBD-1.

FT-IR spectroscopy

FT-IR spectroscopy was used to structurally analyze RBD-1 and RBD-2.
As shown in Figure 5B, the spectra of the two proteins both displayed characteristic peaks at 949 cm⁻¹ (O-H wagging), 1159 cm⁻¹ (C-O stretching), 1467 cm⁻¹ (C-O vibration) and 1699 cm⁻¹ (C=O stretching). RBD-1 showed two sharp peaks at 3443 and 3346 cm⁻¹, assigned to N-H stretching in primary aliphatic amines. As compared with RBD-1, the peak at 3443 cm⁻¹ was shifted to 3552 cm⁻¹ (O-H stretching) for RBD-2. Moreover, the signal of RBD-2 at 949 cm⁻¹ was stronger than that of RBD-1. This indicated that the hydroxyl content of RBD-1 was lower than that of RBD-2, possibly related to the absence of glycosylation in RBD-1.

SPR analysis of RBD-1 and RBD-2

The kinetics of the binding of the two proteins to ACE2 was determined by SPR analysis. The kinetic curves are shown in Figure 6. (Figure 6: SPR analysis of the interaction between RBD and ACE2. RBD-1 (A) and RBD-2 (B) were measured at a flow rate of 10 μL/min with an association phase of 1 min after injection, followed by dissociation for 3 min. Dose-response data were collected in the single cycle format.) The association rate (k_a), dissociation rate (k_d), and dissociation constant (K_D = k_d/k_a) were calculated from the kinetic curves. ACE2-RBD-2 showed a k_a of 4.51 × 10⁵ M⁻¹s⁻¹, a k_d of 1.92 × 10⁻³ s⁻¹, and a K_D of 4.25 × 10⁻⁹ M (Figure 6B). In contrast, ACE2-RBD-1 showed a k_a of 2.81 × 10⁵ M⁻¹s⁻¹, a k_d of 8.38 × 10⁻³ s⁻¹, and a K_D of 2.98 × 10⁻⁸ M (Figure 6A). Thus, the kinetic results suggested that RBD-1 could strongly bind ACE2.

DISCUSSION

At present, several SARS-CoV-2 vaccine candidates have been developed using the RBD of the SARS-CoV-2 S protein as the antigen. The RBD antigen has mainly been expressed in eukaryotic cell expression systems, such as mammalian cells, insect cells, and yeast cells. However, RBD-2 derived from eukaryotic cells suffers from high cost and low expression level. As compared with RBD-2, RBD-1 expressed by E. coli is greatly scalable at a low cost. In the present study, the product yield of RBD-1 was 13.3 mg/L by flask culture. In contrast, the product yield of RBD-2 in mammalian cells (HEK-293T) was 5 mg/L by cell culture [30]. However, the effectiveness of RBD-1 derived from E. coli needed to be evaluated.

In the present study, RBD-1 was expressed by E. coli in the form of inclusion bodies. The inclusion bodies were dissolved in 6 M guanidine hydrochloride and renatured in the presence of 0.5 M L-arginine. The renatured RBD was purified on a Ni Sepharose Fast Flow column. RBD was thus obtained by a one-step affinity purification process with high purity and high purification yield. As compared with RBD expressed by the HEK293 cells (RBD-2), RBD expressed by E. coli (RBD-1) lacks glycosylation. The absence of glycosylation correlated with the decreased size of RBD-1, which may shorten the serum duration of RBD. If RBD were formulated and used as a nanoparticle vaccine, the size effect of RBD could be neglected. The structure of RBD-1 was investigated by CD, fluorescence and FT-IR. CD suggested that the major β-sheet content of RBD-1 was almost unaltered. Fluorescence spectroscopy suggested that the tertiary structure of RBD-1 was slightly changed. FT-IR spectroscopy revealed that RBD-1 lacked glycosylation, with a slight structural alteration. SPR analysis suggested that RBD-1 could strongly bind ACE2 with a K_D of 2.98 × 10⁻⁸ M. Thus, our study is of practical significance to ensure the effectiveness of RBD for clinical application.
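The reported kinetics can be checked and visualized with the 1:1 Langmuir model used for the fits. In the sketch below, k_a and k_d are the reported ACE2-RBD-1 values, while R_max and the analyte concentration are illustrative assumptions:

```python
# 1:1 Langmuir binding model as used for the SPR fits: during association
# R(t) = Req * (1 - exp(-(ka*C + kd)*t)) with Req = Rmax*ka*C/(ka*C + kd),
# and during dissociation R(t) = R0 * exp(-kd*t).
import numpy as np

ka, kd = 2.81e5, 8.38e-3          # 1/(M*s), 1/s (reported for ACE2-RBD-1)
KD = kd / ka
print(f"K_D = {KD:.2e} M")        # ~2.98e-8 M, as reported

Rmax, C = 100.0, 100e-9           # response units and 100 nM analyte (assumed)
t_on = np.linspace(0, 60, 61)     # 1 min association phase
t_off = np.linspace(0, 180, 181)  # 3 min dissociation phase

kobs = ka * C + kd
Req = Rmax * ka * C / kobs
R_assoc = Req * (1 - np.exp(-kobs * t_on))
R_dissoc = R_assoc[-1] * np.exp(-kd * t_off)
print(f"Req at 100 nM: {Req:.1f} RU; after 3 min dissociation: {R_dissoc[-1]:.1f} RU")
```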
In summary, RBD-1 was successfully expressed in E. coli and purified by Ni affinity chromatography. RBD-1 was structurally characterized and compared with RBD expressed by the HEK293 cells (RBD-2). The secondary structure and tertiary structure of RBD-1 were largely maintained. Moreover, RBD-1 could strongly bind ACE2 with a K_D of 2.98 × 10⁻⁸ M. Thus, RBD-1 is expected to find application in vaccine design, drug screening and virus test kits.
2021-05-12T13:13:29.423Z
2021-05-07T00:00:00.000
{ "year": 2021, "sha1": "c3c3a6e13c347cf450588f8485d23a05f30ca033", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/elsc.202000106", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "29195e43589b1a541e2335d426187d6cca20f43f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
43939951
pes2o/s2orc
v3-fos-license
Linear Phase Sharp Transition BPF to Detect Noninvasive Maternal and Fetal Heart Rate

Fetal heart rate (FHR) can be monitored using either direct fetal scalp electrode recording (invasive) or an indirect noninvasive technique. In the weeks before delivery the invasive method poses a risk to the fetus, while the latter provides accurate fetal ECG (FECG) information which can help diagnose the fetus's well-being. Our technique employs a variable-order linear phase sharp transition (LPST) FIR band-pass filter which shows improved stopband attenuation at higher filter orders. The fetal frequency fiduciary edges form the band edges of the filter, characterized by varying amounts of overlap with the maternal ECG (MECG) spectrum. The band with the minimum maternal spectrum overlap was found to be optimum, with no power line interference and the maximum number of fetal heart beats being detected. The improved filtering is reflected in the enhanced performance of the fetal QRS (FQRS) detector. The improvement also appears in the fetal heart rate obtained using our algorithm, which is in close agreement with the true reference (i.e., the invasive fetal scalp ECG). The performance parameters of the FQRS detector, such as sensitivity (Se), positive predictive value (PPV), and accuracy (F1), were found to improve even at lower filter orders. The same technique was extended to evaluate a maternal QRS (MQRS) detector and was found to yield satisfactory maternal heart rate (MHR) results.

1. Introduction

All over the world, approximately 2.65 million stillbirths occur during pregnancy or labour, especially in developing countries, giving rise to the need for effective monitoring techniques with regard to fetal health [1]. FHR monitoring is important to recognize pathologic conditions, typically asphyxia, with sufficient warning to enable intervention by the clinician [2]. It is a screening modality for the fetus to detect problems in advance that could otherwise result in irreversible neurological damage, even fetal death [3]. More than 85 percent of all live births in the United States undergo electronic fetal monitoring [4]. Indeed, fetal health monitoring has significant importance in obstetrical procedures and is now widely accepted as the need of the hour. With electronic fetal monitoring (EFM) came the following expectations: provision of accurate FECG information, information of value in diagnosing fetal distress, prevention of fetal death or morbidity, and superiority over many other methods. The fetus can be monitored electronically by two methods: direct and indirect. In the direct invasive method, the FHR is measured by a scalp electrode which is attached to the fetal scalp by means of a coiled electrode [5]. In indirect electronic monitoring methods, such as those using the ultrasound Doppler principle with uterine contractions, the FHR can be monitored, but not as precisely as with the direct invasive FECG [2]. However, the invasive procedure carries a risk of infection for the fetus. The ultrasound transducer with coupling gel is applied to the mother's abdomen where the fetal heart response is best detected. During this measurement, the pulsations of the maternal aorta can be detected and erroneously considered as FHR [6]. The noninvasive FECG (NIFECG) by the indirect method can therefore be used to overcome all these limitations, by placing surface electrodes, such as the 12-lead ECG electrodes, over the maternal abdomen [7]. The maternal thoracic ECG can also be taken as a reference signal along with the abdominal ECG (aECG).
A study was conducted during labour on about 75 pregnant mothers to check the accuracy and reliability of the NIFECG [8]. It was found that the NIFECG recordings were more accurate than the conventional external methods in comparison with the direct fetal scalp recordings. Therefore, for EFM, using NIFECG recordings is the most suitable approach for long-term ambulatory use [9].

1.1. Fetal Physiology. Fetal distress and fetal asphyxia are terms too broad and vague to be applied with any precision to clinical situations. Uncertainty regarding the diagnosis based on interpretation of FHR patterns has given rise to reassuring and nonreassuring patterns [6]. Reassuring FHR patterns include a normal baseline FHR, moderate accelerations, and variability with fetal movement, assuring the well-being of the fetus, whereas nonreassuring FHR patterns include tachycardia (FHR baseline more than 160 bpm), bradycardia (FHR baseline less than 110 bpm) [10], prolonged decelerations, and so on. Severe and prolonged hypoxia induces a prolonged fall in FHR [11]. Some causes of fetal bradycardia include congenital heart block [12]. In fetal tachycardia, the baseline fetal heart rate is greater than 160 bpm. It is reported that fetal body movements affect variability [13], while baseline variability increases with advancing gestation [14]. It is also reported that reduced variability with decelerations is associated with fetal acidemia [15]. An FHR trace that has a consistently flat baseline with no variability and no decelerations, within the normal baseline rate range, may reflect neurological damage in the fetus [16]. It is important that we understand the parameters of the fetal ECG signal, which further aids the analysis of the fetal status and EFM during pregnancy or labour [1].

1.2. Previous Methods to Extract NIFECG from aECG. Researchers in the biomedical field working on fetal extraction and fetal analysis have done extensive work in the last two decades. A large number of detection and extraction techniques are used to separate the FECG from the maternal ECG. Independent Component Analysis (ICA) is a statistical technique whose accuracy is based on using a large number of noise-free maternal abdominal input channels. For ICA to function correctly, certain conditions must hold: (i) the number of measured signals should be equal to or greater than the number of input sources, (ii) there should be an instantaneous linear time-invariant mixing matrix, and (iii) the input sources should be statistically independent [17]. In our application, the first two are not fully satisfied, because artifacts increase the number of sources and fetal movement leads to a non-invariant mixing matrix [18]. ANFIS is an adaptive noise cancellation system which requires an additional maternal thoracic ECG signal as a reference for adaptive cancellation of the maternal ECG. This method depends on how well one trains the ANFIS structure to compute the estimated output FECG signal [19, 20]. The subtraction method is a simple technique, but the major challenge is that the amplitude of the thoracic MECG rarely matches the scale of the MECG present in the aECG signal [21]. As a result, a correct FECG is hardly ever obtained. The wavelet transform method can be used in the preprocessing stage to suppress noise, and maternal cancellation can then be done by template subtraction [22]. Correlation techniques are not very efficient or effective in the detection of nonstationary signals like the ECG [23].
Since IIR filtering has a nonlinear phase response [24], our technique of using a linear phase sharp transition FIR filter is less complex and does not involve many iterations, as the filter response is specified precisely over the entire band. With knowledge of the fiduciary edges and a fair estimate of the spectral overlap of the maternal and fetal ECG, accurate fetal and maternal heart rates can be obtained. Our technique being single-channel, single-lead makes it very convenient and comfortable for maternal home care and long-term monitoring. The amplitude of the MECG is at least 10 times larger than that of the FECG, and the signal-to-noise ratio (SNR) of the MECG is less than unity [25]. The separation of these two ECG signals becomes even more complex as the maternal and fetal ECG overlap in both the time and frequency domains [3]. The aECG signal is further affected by the low frequency noise around 0.5 Hz [26] due to baseline wandering, and the amplitude of the ECG signal also varies by about 15% with respiration [2]. Other noises which affect the aECG are the 50/60 Hz power line interference (PLI) [27], electromyographic noise from the uterus and the muscles of the abdomen, and other motion artefacts [28]. Linear Phase Sharp Transition Band-Pass Filters. The location of the passband and its width are critical factors that affect the design and implementation of the filter. Usually, sharp transition BPFs are realised as composites of high-pass and low-pass filters, as done in [29]. There, the interpolated FIR technique was used, wherein every time the centre frequency of the BPF was changed, the two composite filters had to be redesigned. Approximate expressions for the value of the interpolating factor and the filter hardware required are derived which minimize the total arithmetic hardware used in the overall band-pass realization. The two-branch structure realization is more efficient than the conventional direct form realization, with an increase in the number of delays [30]. Another technique, for symmetric BPFs, is given in [31]. The filter is implemented by two parallel quadrature filter branches, with each branch derived from a complex modulation of a low-pass interpolated FIR filter by complex exponentials. The input signal is modulated with a sine/cosine sequence in order to achieve the desired frequency shift in the frequency response. In the current work, we propose a two-stage method to obtain the noninvasive FQRS from a single-lead maternal abdominal signal by first applying the designated fiduciary edges to the linear phase sharp transition (LPST) FIR band-pass filter with a sharp transition width. In the second stage, an FQRS detector based on the Pan-Tompkins QRS detector algorithm [32] is used. The QRS detector module consists of an amplitude squarer, a moving window integrator, a moving average filter, and an adaptive threshold process, which effectively detects the fetal R-peaks. Methodology Using LPST FIR Band-Pass Filter Our proposed technique of an integrated LPST FIR band-pass filter has low computational complexity. Normally, the composite low-pass and high-pass filters have to be redesigned for any change in the centre frequency and passband width of the desired BPF. Our proposed technique for the integrated BPF design departs from this approach completely. It eliminates the need for a centre frequency and a fixed passband width as used in [33]. Our design of the LPST FIR BPF allows the user to set the cutoff frequencies for a narrow passband width. It also incorporates a very linear, sharp transition width while reducing the effects of the Gibbs phenomenon and thereby reducing the passband ripple of the filter [34].
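Before the filter model is formalized below, a minimal end-to-end sketch of this two-stage idea may help fix the structure. This is an illustrative Python stand-in, not the authors' implementation: the band edges (35-49 Hz) anticipate the fetal fiduciary edges derived later in the paper, scipy's window-method design (firwin2 with antisymmetric=True, giving an odd-length, antisymmetric linear-phase FIR) stands in for the closed-form LPST design of the next section, and a fixed statistical threshold stands in for the adaptive threshold of [44].

```python
import numpy as np
from scipy.signal import firwin2, lfilter

FS = 1000  # Hz; sampling rate of the adfecgdb records used later in the paper

def fhr_from_aecg(aecg, lo=35.0, hi=49.0, win=75):
    """Two-stage sketch: sharp-transition FIR band-pass, then a Pan-Tompkins-style detector."""
    # Stage 1: antisymmetric linear-phase FIR band-pass over the fetal fiduciary band.
    taps = firwin2(1001, [0, lo, lo + 1, hi - 1, hi, FS / 2],
                   [0, 0, 1, 1, 0, 0], antisymmetric=True, fs=FS)
    fecg = lfilter(taps, 1.0, aecg)          # constant group delay, so R-R spacing is kept
    # Stage 2: amplitude squarer -> moving-window integrator -> moving-average smoother.
    sq = fecg ** 2
    integ = np.convolve(sq, np.ones(win), mode="same")
    smooth = np.convolve(integ, np.ones(win) / win, mode="same")
    # Fixed threshold from signal statistics (the paper adapts it per record instead).
    thr = smooth.mean() + 2.0 * smooth.std()
    above = smooth > thr
    peaks = np.flatnonzero(above[1:] & ~above[:-1])   # rising edges ~ fetal R-peak indices
    rr = np.diff(peaks)                               # fetal R-R intervals, in samples
    return FS * 60.0 / rr                             # instantaneous FHR in bpm
```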
LPST FIR BPF Model and Design. In this section, the design of a LPST FIR BPF is presented. For the proposed filter model, the five regions of the filter response are modelled using trigonometric functions of frequency. The filter model magnitude response H(ω) is shown in Figure 1, and the frequency responses for the five regions are given in (1). Using (1), the filter design parameters k_1, k_2, k_3, k_4, and k_5 for the five regions of the band-pass filter are evaluated and listed in (2), where ω_s1 and ω_s2 are the stopband edge frequencies while ω_p1 and ω_p2 are the passband edge frequencies, δ_s and δ_p are the stopband attenuation and passband ripple, respectively, and m_1, m_3, and m_5 are integers. The impulse response coefficients h(n) for the FIR band-pass filter are obtained from [35]

$$h(n) = \frac{1}{\pi}\int_{0}^{\pi} H(\omega)\,\sin(k\omega)\,d\omega. \qquad (3)$$

Substituting the magnitude response H(ω) for each region from (1) and (2) in (3), we get

$$h(n) = \frac{\delta_s}{4\pi}\left[\,\cdots\,\right]. \qquad (4)$$

Equation (4) is the expression for the band-pass filter model impulse response h(n). We can choose the effective passband width (ω_p2 − ω_p1), with (ω_s1 − ω_p1) = (ω_s2 − ω_p2), as small as possible for a sharp transition at the passband edges. Expression for Frequency Response Coefficients of a LPST FIR Filter. Let h(n) given by (4) be the impulse response coefficients of an N-point linear phase FIR filter [36], where 0 ≤ n ≤ N − 1 and

$$k = \frac{N-1}{2} - n. \qquad (5)$$

In the case of an antisymmetric response with N odd [37], the frequency response of the FIR band-pass filter is given as

$$H(\omega) = \sum_{k=1}^{(N-1)/2} c_k \sin(k\omega) \qquad (6)$$

for appropriate coefficients c_k. This response is most suitable for the proposed band-pass filter as H(0) = 0 and H(π) = 0. If we refer to (5), k is an integer for N odd. Other constraints are as follows: (i) in (2), k ≠ k_1, k ≠ k_3, and k ≠ k_5, and (ii) k_1, k_3, and k_5 should not be integers. However, k_2 and k_4 do not have any constraints. 2.3. Fetal Frequency Spectrum. In our experiment, to extract the QRS of the MECG and FECG from online Physionet databases [38], we used (i) the Abdominal and Direct Fetal Electrocardiogram Database (adfecgdb), which provides abdominal ECG recordings (channels 2 to 5) for 5 minutes each from five different subjects during the 38-41-week gestation period [39,40]. In addition, for each subject, a simultaneously recorded scalp or direct fetal ECG record (channel 1) is the golden reference in the evaluations to be made on the respective records. (ii) The Non-Invasive Fetal Electrocardiogram Database (nifecgdb) provides 55 records of different lengths from a single subject taken from the 20th week of pregnancy [41]. Channels 1 and 2 represent maternal thoracic ECG signals, while channels 3 to 6 are abdominal ECG recordings with only MQRS reference annotations. The Q-R-S fiducial edges of the thoracic MQRS and the invasive FQRS signals were obtained for each record. The fast Fourier transform (FFT) was obtained for the above records; an average frequency range for the MQRS was found to be 10-34 Hz, while the average FQRS spectrum was 20-56 Hz. FHR varies with gestational age: it ranges from 70 beats per minute (bpm) at four weeks to 175 bpm at 12 weeks and further decreases to a range of 110 to 160 bpm at full term [42], with an average value of 140 bpm. The FECG bandwidth ranges from 0.05 to 100 Hz [2]. In comparison, the maternal rate normally ranges from 50 to 210 bpm with an average of 80 or 89 bpm [42].
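The rate figures above convert directly into per-second frequencies, and the next subsection builds fetal-to-maternal frequency ratios from assumed maternal (70-100 bpm) and fetal (110-140 bpm) ranges before feeding them into (8) and (9). A quick check of that arithmetic (the closed forms of (8)-(11) are not reproduced here):

```python
# Maternal and fetal rate ranges assumed in the next subsection, converted to beats/s.
m_lo, m_hi = 70 / 60, 100 / 60        # maternal: ~1.166 to ~1.666 bps
f_lo, f_hi = 110 / 60, 140 / 60       # fetal:    ~1.833 to ~2.333 bps

r_min = f_lo / m_hi                   # minimum fetal-to-maternal frequency ratio ~ 1.10
r_max = f_hi / m_lo                   # maximum fetal-to-maternal frequency ratio ~ 2.00
print(r_min, r_max, (r_min + r_max) / 2)   # average f/m ratio ~ 1.55
```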
We assumed the maternal beats per minute range to be 70-100 bpm (1.166-1.666 bps) and the fetal beats per minute range to be 110-140 bpm (1.833-2.333 bps). The minimum and maximum fetal-to-maternal (f/m) frequency ratios are obtained from (8) and (9) to compute the average f/m frequency ratio. From (11), we get the lower fiduciary edge of the FQRS to be 27 Hz, which will remove all the low frequency noise including baseline wander frequencies, and the upper fiduciary edge to be 48 Hz, which will remove the 50 Hz PLI and its harmonics along with the high frequency noise [27]. The fetal QRS frequency band spectrum can also be further narrowed down so as to avoid the frequency band overlap of the MECG and FECG. Accordingly, the upper fiduciary edge of the fetal QRS spectrum is chosen to be 49 Hz or 98π rad/sec. The lower fiduciary edge of the fetal spectrum is set to 35 Hz or 70π rad/sec, since the upper MQRS edge is reported to be approximately 35 Hz [42]. This results in a fetal passband width of 14 Hz or 28π rad/sec. FQRS Detector. To obtain the FQRS from the band-pass filtered signal, we tried various algorithms, including the peak-finding logic using the Hilbert transform [43]. We proposed a simple QRS detection algorithm based on the Pan-Tompkins algorithm [32]. The modified FQRS detector comprises four stages: (i) amplitude squarer, (ii) moving window integrator, (iii) moving average filter, and (iv) adaptive threshold. The filtered FECG signal from the LPST FIR BPF is given to the amplitude squaring stage, wherein the signal is squared point by point. This nonlinear process further enlarges the high frequency fetal R-peak signals and minimizes the other lower frequency components. Further, we used a moving window integrator with a sampling frequency (f_s) of 1 kHz. This integrator effectively sums the area under the squared waveform over a fixed window interval, advanced one sample interval at a time. The width of the moving window was set to 75 sample intervals for FQRS detection, while a window 152 samples wide was adjusted for MQRS detection. A too-large window can merge the QRS-integrated waveform and the T wave, whereas if the window is too narrow, a QRS complex could produce several peaks at the output stage [32]. Additionally, a moving average filter was used, which smoothed the integrated signal so that a single fetal R-peak could be computed. Based on the algorithm in [44], an adaptive threshold is automatically generated and adjusted to float above the unwanted noise peaks. Initially, the signal peak value is adjusted manually as per the amplitude of each record [44]. The fetal R-R interval (Δn) is calculated as (n_{i+1} − n_i), where n_i is the time index corresponding to the i-th computed fetal R-peak at the output of the FQRS detector (i = 1, 2, …). The FHR is computed for each record using

$$\mathrm{FHR\ (bpm)} = \frac{f_s \times 60}{\Delta n}. \qquad (12)$$

Synthesis of the LPST FIR Band-Pass Filters. The LPST FIR band-pass filter was implemented using (7). The following FQRS band-pass fiduciary edge cutoff frequencies (rad/sec) were substituted as per Figure 1: ω_s1 = 70π, ω_p1 = 72π, ω_p2 = 96π, and ω_s2 = 98π. Also, the stopband and passband ripple were set to δ_s = δ_p = 0.01. An equal transition width of 2π rad/sec, or 1 Hz, was chosen at both ends of the passband. The measured magnitude responses of the band-pass filters are compared in Tables 1 and 2 along with the filter design specifications.
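The closed-form synthesis of (7) is specific to this paper, but its stated specifications (1 Hz transitions at 35/36 Hz and 48/49 Hz, δ_s = δ_p = 0.01, f_s = 1 kHz) can be approximated with a generic antisymmetric window-method design to reproduce the kind of magnitude-response measurements reported in Tables 1 and 2. scipy.signal.firwin2 is used here purely as a stand-in, not as the authors' filter; the same qualitative trend — higher stopband attenuation and smaller passband ripple at higher order — can be checked this way:

```python
import numpy as np
from scipy.signal import firwin2, freqz

FS = 1000.0                                  # Hz
FREQ = [0, 35, 36, 48, 49, FS / 2]           # ws1, wp1, wp2, ws2 from the synthesis above
GAIN = [0, 0, 1, 1, 0, 0]

for n_taps in (201, 501, 1001):
    taps = firwin2(n_taps, FREQ, GAIN, antisymmetric=True, fs=FS)
    w, h = freqz(taps, worN=1 << 15, fs=FS)
    mag = np.abs(h)
    stop = (w <= 35) | (w >= 49)
    band = (w >= 36) & (w <= 48)
    atten_db = -20 * np.log10(mag[stop].max())        # worst-case stopband attenuation
    ripple = mag[band].max() - mag[band].min()        # peak-to-peak passband deviation
    print(f"N={n_taps}: stopband {atten_db:5.1f} dB, passband ripple {ripple:.4f}")
```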
The true reference, namely the scalp fetal R-peak annotations from each record of the Physionet database, was compared with our experimentally measured values, which were computed using the Matlab toolbox. For example, we evaluated our algorithm on the one-minute record (r08), channel 4, of the adfecgdb database and found TP = 132, FN = 1, and FP = 0. The sensitivity, PPV, and F1 were obtained to be 99.24%, 100%, and 99.61%, respectively. The average FHR values for the true reference and the algorithm FHR were computed to be 132.09 bpm and 132.59 bpm, respectively. Figure 2 illustrates the true reference FHR (bpm) plotted with our algorithm-based fetal heart rate variability (FHRV) for record r08. The dotted lines indicate the ±10 bpm tolerance assumed in our case with respect to the true reference FHR trace. It was seen that the difference between the reference FHR and the algorithm FHR for most R-peaks was less than ±8 bpm. Discussions We designed a LPST FIR band-pass filter such that the magnitude H(ω) in the passband and stopband is not constant: a small ripple of 0.01 was inserted in the stopband as well as the passband so that the Paley-Wiener criterion is not violated [46]. The FIR filter was designed for a sharp transition width (ω_s − ω_c) of 1 Hz or 2π rad/sec. The magnitude responses of the proposed band-pass filter are shown in Figures 3(a)-3(c). Table 3 depicts the performance of the filter for various filter orders (N). There is a reduction of the Gibbs phenomenon with these filter designs. For conventional FIR sharp transition filters, the peak passband ripple due to the Gibbs phenomenon is about 18% [34,46]. In our proposed LPST FIR band-pass filters, the passband losses are quite low, as can be seen from Table 2. It can also be seen that the stopband attenuation surpasses the design specification at higher orders and that the passband ripple decreases for higher filter order, as seen from Table 3. The sampling rate (1 kHz) is much higher than the Nyquist rate (approximately 200 Hz), and a high filter order of N = 1001 was chosen to improve the quality of the extracted FECG. Various filter orders (N = 201, 501, 1001, 2001, and 5001) were implemented to check the performance of the filters, as shown in Figure 3(d). These filters are unlike classical filters in that they possess a narrow stopband and/or passband and also sharp transition regions. The magnitude response, the linear plot, and the magnified view of the BPF are shown in Figures 3(a)-3(c), respectively, with the filter order N equal to 1001. As seen from Figure 4, the average transition width approaches the design specifications at higher orders. The performance curves of Se, PPV, and F1 are highly linear in the range of filter orders (N) from 2001 to 5001, as seen in Figure 5. This improvement may be due to better filtering at higher order.
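The detector statistics behind these performance curves are the usual counts of true positives (TP), missed beats (FN), and false detections (FP); a quick check of the arithmetic against the r08 figures quoted above (small differences come from rounding):

```python
def qrs_metrics(tp, fn, fp):
    se = tp / (tp + fn)              # sensitivity
    ppv = tp / (tp + fp)             # positive predictive value
    f1 = 2 * se * ppv / (se + ppv)   # accuracy measure F1
    return se, ppv, f1

# r08, channel 4: TP=132, FN=1, FP=0 -> Se ~ 99.2%, PPV = 100%, F1 ~ 99.6%
print(qrs_metrics(132, 1, 0))
```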
The direct fetal scalp ECG is the standard reference FECG signal (channel one), as shown in Figure 6(a). The raw maternal aECG signals were taken from channel 4 of record r08 of the adfecgdb database, as shown in Figure 6(b). The frequency spectrum of the signal after passing through the BPF (frequencies between 35 Hz and 49 Hz) is shown in Figure 7(a). The band-pass filtering effectively gives us the required frequency spectrum of the FECG, which can be seen in the time domain plot in Figure 7(b). When the FQRS signal is passed through the amplitude squarer, the predefined positive peaks are prominently amplified, as shown in Figure 8(a). Figure 8(b) shows the moving window integrator, which integrates this signal with a selected window size, effectively picking the correct fetal R-peak indices. An illustration from Figure 8(b) shows that the time indices (n) for the first two detected fetal R-peaks are 2709 and 3155, respectively, both above the adaptive threshold value. As shown in Figure 8(c), the FHR at these indices n_{i=1} and n_{i=2} is computed to be 134.52 bpm using (12). Among the four types of fetal frequency fiduciary edges of the BPF, type 1 (27-53 Hz) will absorb some of the PLI in the ECG record, whereas type 2 (27-48 Hz) avoids PLI, unlike type 1, but has a partial overlap with the maternal ECG spectrum. Similarly, type 3 (35-53 Hz) will again have the PLI problem but has no maternal spectrum overlap. Finally, type 4 (35-48 Hz) can be considered optimum, since the maternal spectrum overlap and PLI are both absent. In spite of narrowing the spectrum in this case, there are no missing fetal beats. The illustration of the true reference FHR plotted with our algorithm-computed FHR for the four sets of fetal frequency fiduciary edges of the BPF is shown in Figure 9. The FQRS detection performance parameters Se, PPV, and F1 were calculated for all four channels of each of the 5 adfecgdb records using the type 4 fetal frequency fiduciary edges, as shown in Figure 10. It was observed that all the above three parameters were 100% for channel 4 of records r01 and r08. The missed fetal R-peaks (FN) were seen in some channels of records r04, r07, and r10, while the falsely identified fetal R-peaks (FP) were the fewest in most of the records. It is found that this technique can be extended to detect the maternal heart rate merely by changing the fiduciary edges of the BPF to ω_s1 = 10π and ω_s2 = 40π, as shown in Figure 11. As an illustration, for adfecgdb record r01 (channel 3), the detector found TP = 89, FN = 3, and FP = 0, giving Se, PPV, and F1 of 96.74%, 100%, and 98.34%, respectively, as shown in Figure 11(e). Similarly, the QRS detection algorithm was tested for the MHR using the Physionet nifecgdb database for all 55 records, with 3 to 4 channels each. It was observed that the MHR for all four aECG channels of most records closely matched the MQRS reference annotations. Abdominal signals from channels 5 and 6 of records such as ecgca416, ecgca597, ecgca649, ecgca771, ecgca848, and ecgca986 displayed a large percentage error in the computed MHR bpm value compared with the reference MHR, due to the degradation of the acquired aECG signals, as seen in Figure 12. Conclusion In this paper, we described a technique of fetal heart rate detection performed noninvasively. This technique was implemented using a linear phase sharp transition FIR band-pass filter. We considered four types of fetal frequency fiduciary edges characterized by varying amounts of overlap with the maternal ECG spectrum. Type 4 was found to be optimum, with no PLI, no maternal spectrum overlap, and no fetal beats missed. It is found that increasing the filter order improved the average transition bandwidth, passband ripple, and stopband attenuation of the filter. The fetal R-peaks generated by our algorithm were compared with the scalp fetal R-peak annotations from the Physionet databases.
The algorithm-generated fetal R-peaks were found to be in close agreement with the reference annotations, including the average FHR values of the true reference and the algorithm FHR. Similarly, the other performance indices, Se, PPV, and F1, showed promising results, even for lower filter orders. The same technique was successfully extended to maternal heart rate detection.
2018-06-05T04:17:50.060Z
2018-03-29T00:00:00.000
{ "year": 2018, "sha1": "db422a5612ab135f390dc62d868fc4146cc1574d", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jhe/2018/5485728.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "74fac9916f85bbe1670a73a64e5723e91a2c437d", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
259491403
pes2o/s2orc
v3-fos-license
Anxiety and Willingness to Communicate in Language Learning: A Case Study This case study attempts to find out the role of individual differences (IDs) in the language learning of an advanced Bangladeshi EFL learner who aims to study at a graduate level program in Education at a North American university. A two-phase structured questionnaire interview has been conducted in which phase one comprises 40 open-ended questions with a general structure that attempts to find out the various personal characteristics of the interviewee, namely 'personal profile,' 'learning history,' 'linguistic background,' etc. In phase two, there are two sub-phases, and the questionnaire in the first sub-phase focuses on four IDs, namely 'personality,' 'learning style,' 'anxiety,' and 'willingness to communicate,' with a general research question concerning the effects of these IDs on the interviewee. In the second sub-phase, the researcher narrows down the focus of the research question to the two most important IDs influencing the interviewee's learning of English, namely 'anxiety' and 'willingness to communicate.' For the second sub-phase of phase two, the researcher uses four quantitative measurement scales, two measuring the interviewee's anxiety (adapted from Horwitz, Horwitz, and Cope, 1986) and the other two measuring her willingness to communicate (adapted from MacIntyre, 2001) both inside and outside of classrooms. The results show that the interviewee feels very anxious and addled in the classroom, whereas she feels quite the opposite outside the classroom. Naturally, her willingness to communicate in the classroom is very low as opposed to outside the classroom. Introduction This case study attempts to find out the role of individual differences (IDs) in the language learning of an advanced Bangladeshi EFL learner who aims to study at a graduate level program in Education at a North American university. The learner will be addressed as Masha (a pseudonym), who is an exchange visitor on a J-visa living in downtown Washington DC with her husband. A two-phase interview was conducted in which phase one comprises a generally structured open-ended questionnaire with 40 questions that attempts to find out "enduring personal characteristics that are assumed to apply to everybody and which people differ by degree" (Dornyei, 2005, p. 644) in relation to the roles they played in Masha's learning of English as a foreign language. Phase two narrows the general focus to more specific questions by analyzing the qualitative responses from the phase one interview. In other words, for the phase two interview, only those IDs were chosen which were "consistently shown to correlate strongly with L2 achievement -to a degree that no other SLA variables match" (ibid, 643). For phase two, the researcher used four quantitative measurement scales, two measuring the interviewee's anxiety (adapted from Horwitz, Horwitz, and Cope, 1986) and the other two measuring her willingness to communicate (adapted from MacIntyre, 2001) both inside and outside of classrooms. Phase one of the interview took place at 4 p.m. on April 8, 2013 at a local restaurant in Washington DC. The researcher came to know about Masha through a friend of his and contacted her via email first to express his interest in interviewing her. He also added that the interview would not be timed, so she could take her time in answering the questions, but that it would be audio-recorded. As Masha agreed, the date was set for the interview. The interview lasted 1.5 hours and was audio-recorded in its entirety.
The second phase of the interview (also face-to-face) took place on April 14, but was not audio-recorded since it was an attitude questionnaire in which Masha had to choose from a range of responses. L2 learning history and the linguistic environment The interviewee is a 25-year-old female from Bangladesh, a country that falls under the outer circle of the three concentric circles defined by Kachru (1985), in which "English is not the native tongue, but is important for historical reasons and plays a part in the nation's institutions, either as an official language or otherwise." She completed her B.A. in English Language and Literature from a private university in Dhaka, Bangladesh. On completion of her B.A. degree, she came to the United States in August 2011 as a J-2 dependent (a non-immigrant visa) with her husband. She works for a coffee shop in downtown DC where she has to interact with a lot of native and non-native speakers of English every day. In part I of the interview (Personal Profile, Learning History, and Linguistic Background), she identified herself as an advanced user of English, as aptly reflected in her IBT TOEFL score (105 out of 120). She spoke Bangla as her native language and also as the medium of learning for her entire pre-tertiary education of 12 years. In other words, the medium of instruction for all the content areas in her school was Bangla, and she had studied English as a subject like other content area courses (e.g., History and Geography) from the very start of her schooling at the age of five. She was first exposed to an English-speaking learning environment when she enrolled in the B.A. in English Language and Literature program at a university in Dhaka. Till then, she had hardly had any opportunity to practice her English (speaking) beyond classrooms. Moreover, even within the classroom, there was extensive use of L1, which again limited the scope for speaking practice. Shedding light on her academic learning environment for English, she mentioned that she had four one-hour English classes at the primary level and five 1.5-hour English classes at the secondary level. The English classes were mostly limited to reading, writing, and grammar throughout, as there was no systematic teaching of listening and speaking. Masha identified the reason for this lopsided focus as the exam-oriented education system. Since the tests/exams were limited to only reading and writing, teaching was largely geared towards achieving these skills and thereby preparing the learners solely for the test. Listening and speaking were expected to be learned as a by-product of classroom discussion. The textbooks were largely reading-based, followed by comprehension questions. Usually the writing tasks were integrated into the readings as extension activities. A typical example of her secondary English textbook activity would be a reading comprehension, say, on family planning, followed by comprehension questions and controlled writing activities like filling in the blanks with missing information. She was then instructed to write a paragraph on selected issues of the reading passage in question. Also, there were literary texts (mostly short stories and poems) to stimulate "creative thinking." She added that there was not much variation between the class work and homework, since both emphasized mostly similar reading and writing activities. In her words, "the homework was mere extensions of the class work."
However, despite her limited exposure to English in academia, she had watched English cartoons on various satellite channels in Dhaka since Grade 1. When she reached sixth grade, she started watching Disney movies regularly. She said that natural exposure to English through visual media was extremely helpful in getting her ears attuned to understanding both British and American English. She said that at the beginning, "it was just a lot of listening, with almost zero speaking," which, according to her, helped to build a solid linguistic as well as pragmatic foundation for her future learning of English. Relevant personal characteristics influencing language learning As Ellis (2008) pointed out, the factors in the study of individual differences "overlap in vague and indeterminate ways," and it is sometimes impossible to figure out the exact role of a given factor in relation to the other factors (p. 644). As an attempt to systematize the study of the IDs, Ellis (2008) divided them in terms of 'abilities' (cognitive capabilities for language learning), 'propensities' (cognitive and affective qualities related to language learning), 'learner cognitions about L2 learning,' and 'learner actions.' However, Dornyei (cited in Ellis, 2008) pointed out that it may not always be easy to decide if an ID constitutes an 'ability' or a 'propensity.' That is why it may be more sensible, as Ellis (2008) commented, to treat them separately. Following is an individual discussion of the four ID factors, namely learning style, personality, anxiety, and willingness to communicate. Learning Style According to Keefe (1979a), learning styles refer to the characteristics that indicate "how learners perceive, interact with and respond to the learning environment" (p. 4). He also describes it as a "consistent way of functioning" which reflects the "underlying causes of behavior" (p. 4). As for measuring learning style, there are research instruments that have been borrowed from general psychology, for example, Dunn et al.'s (1991) Productivity Environmental Preference Survey, while others have been specifically designed to investigate language learners, for example, Reid's (1987) Perceptual Learning Style Questionnaire (Ellis, 2012, pp. 667-668). Dunn et al.'s (1991) Productivity Environmental Preferences Survey measures learning styles in "four areas: a) preferences for environmental stimuli, b) quality of emotional stimuli, c) orientation towards sociological stimuli, and d) preferences related to physical stimuli" (cited in Ellis, 2012, p. 668). It is designed to showcase preferences pertaining to both personality and learning style. Bailey, Onwuegbuzie, and Daley (2000) employed this questionnaire among 100 French and Spanish first and second semester students studying at a US university. The study reveals that higher achievers prefer informal classroom design as opposed to "receiv[ing] information via kinesthetic mode" (p. 115). On the other hand, Reid's (1987) perceptual learning styles questionnaire was created based on four perceptual learning styles (visual learning, auditory learning, kinesthetic learning, and tactile learning) and two social learning styles (group preferences, individual preferences). She administered the survey on learners of
different language backgrounds and found that learners had a general preference for kinesthetic and tactile learning, with a negative attitude (among both native and nonnative learners) towards group work. However, a modified version of Reid's questionnaire, conducted by Wintergerst, DeCapua and Verna (2003), revealed a contradictory result (learners preferred group work over individual work). The researchers consider the time gap and various social learning styles to be responsible for the contradictory results. Ellis (2012, p. 671) concludes that since learners' preferences towards L2 learning approaches vary to a great extent, it is almost impossible to choose the best one. He mentions 'flexibility' to be a plausible reason for learners' success, but it lacks real evidence. He adds that it is unlikely that progress will happen in this respect unless and until researchers know what it is that they want to measure. Personality Pervin and John (2001) define 'personality' as an expression of a consistency in the pattern of "feeling, thinking and behaving" (cited in Ellis, 2012, p. 672). Both teachers and learners consider it to be a very important aspect of language learning, as evidenced in Griffiths' (1991b) and Naiman et al.'s (1978) studies, respectively. As for measuring personality, different language-specific questionnaires have been developed to determine "dimensions of personality" like tolerance of ambiguity or risk taking. The "Eysenck Personality Questionnaire" and the "Myers-Briggs Type Indicator" are examples of two types of general questionnaires to identify a learner's personality. Among the many dimensions of personality, the most notable is extraversion/introversion. Two hypotheses have been made for correlating extraversion/introversion with L2 learning. The first hypothesis, the one which has been widely researched, states that extrovert learners acquire basic interpersonal communication skills better due to the opportunity for more practice, leading to a bigger chance of success. The second hypothesis states that cognitive academic language develops more for introvert learners due to their time dedication towards academic reading and writing. Strong's (1983) review of 12 studies revealed that extroversion was an advantage for language acquisition, which supports the first hypothesis. However, Dörnyei and Kormos' (2000) study failed to find a positive correlation between language acquisition and extraversion. Dewaele and Furnham (1999), on the other hand, concluded from their study that extrovert learners, though they may be fluent in both L1 and L2, are not necessarily accurate (cited in Ellis, p. 674). Much research has been conducted to prove the validity of the second hypothesis. However, most of those studies (Busch, 1982; Carell, Prince, and Astika, 1996; Ehrman and Oxford, 1995) have found either an insignificant or a weak relationship (cited in Ellis, p. 674) between extraversion/introversion and academic proficiency. The Big Five Model, an important theory of personality in psychology, has five dimensions of personality (openness to experience, conscientiousness, extraversion-introversion, agreeableness, and neuroticism-emotional stability) (Big Five Personality Test, June 05, 2015).
This model has been modified and used by Verhoeven and Vermeer (2002), where it was found that children showing interest in belonging to and identifying with their target-language-speaking peers achieve success in learning (p. 373). Recent studies have seen more success than previous ones in correlating language with personality traits. However, there are limitations, like the situational dependence of personality, variables like attitude, motivation, and situational anxiety influencing the effect of personality, and methodological deficiency. Anxiety The learning situation affects the learning process of both naturalistic and classroom learners. Language, according to Pavlenko (2006b), is an "inherently emotional affair" (cited in Ellis, 2012, p. 691). Researchers like Horwitz and Young (1991), Arnold (1999), and Young (1999) (cited in Ellis, 2012, p. 691) believe anxiety to be SLA's most noticeable affective aspect. Three types of anxiety are present: trait anxiety, state anxiety, and situation-specific anxiety. Trait anxiety is the tendency of being anxious all the time, whereas state anxiety is what a learner feels in a particular moment as a reaction to a certain situation (Spielberger, 1983, cited in Ellis, 2012). Finally, situation-specific anxiety is the apprehension a learner feels in situations like examinations, public speaking, etc. (Ellis, 2012, p. 691). Among the many techniques of measuring the correlation between anxiety and achievement, diary data, questionnaire responses correlated with achievement, experiments, and reports of learners' responses to language learning conditions are notable. Spielmann and Radnofsky's (2001) ethnographic studies containing "rich description of learners' reactions to their learning situations" address "three issues: 1) the source of language anxiety, 2) the nature of the relationship between language anxiety and language learning, and 3) how anxiety affects learning" (cited in Ellis, 2012, p. 692). Another notable study, the diary study by Bailey (1983, cited in Ellis, 2012), contains the analysis of 11 learners, where she found that when learners find themselves more proficient than their peers, their anxiety decreases. She mentioned tests, teacher-student relationships, etc. to be some sources of anxiety. Ellis and Rathbone (1987) found from their study that teachers' questions can be another source of anxiety. However, it is really hard to identify the sources of anxiety, because Horwitz (2001), from the review of her studies, revealed that in most cases the tasks that were considered "comfortable" by some were considered "stressful" by others (p. 118). Among the many instruments for measuring anxiety level, Horwitz, Horwitz, and Cope's (1986) Foreign Language Classroom Anxiety Scale is notable. Their 33-item questionnaire tries to relate to the three sources of anxiety (communication apprehension, test anxiety, and fear of negative evaluation) for speaking and listening in L2 acquisition (Ellis, 2012, p. 693). On the other hand, Cheng, Horwitz, and Schallert (1999) developed a questionnaire to identify the relationship of reading and writing anxiety with general language anxiety (Ellis, 2012, p. 693). Language learning and anxiety are related to each other, and three positions have been identified regarding the relationship between anxiety and language learning.
The first position, that anxiety facilitates learning, was supported by Eysenck (1979), who said that "low level anxiety" motivates learners to give more effort (Ellis, 2012, p. 694). MacIntyre (2002), Chastain (1975), and Kleinmann (1978) assumed a similar position in their studies. The second position, that anxiety has a negative impact on language learning, was supported by Chastain (1975) and Horwitz (1986), who found a negative correlation between anxiety and grades or marks. Ely (1986a) found that learners with high anxiety levels took fewer risks. That is, their motivation was negatively affected (Ellis, 2012, p. 694). The third position, that language anxiety is the result of difficulty with learning rather than its cause, was supported by a series of studies conducted by Sparks, Ganschow, and Javorsky (2000), which claim that anxiety regarding L2 learning is a result of language difficulties faced by the learners (Ellis, 2012, p. 695). An important model of anxiety and the language learning process was proposed by MacIntyre and Gardner (1991a), known as the developmental model, which tries to relate learners' developmental stage and situation-specific learning experiences with learner anxiety. This model is consistent with Parkinson and Howell-Richardson's (1990) diary studies, which revealed that anxiety develops because of learners' "bad learning experience" (Ellis, 2012, p. 695). Elkhafaifi's (2005) study showed that beginner learners had more listening anxiety than intermediate or advanced learners, as "anxiety reduces as they develop" (Ellis, 2012, p. 695). MacIntyre and Gardner (1991b) developed their model based on their study in which they used video cameras to observe anxiety levels in the three stages (input stage, processing stage, and output stage). They found the anxiety level to be highest just after introducing the video camera. However, gradually, learners overcame the anxiety and compensated for it by increasing performance (Ellis, 2012, p. 696). Willingness to communicate Willingness to communicate (WTC), in other words, "the intention to initiate communication given a choice" (MacIntyre, Baker, Clement, and Conrad, 2001), is considered to be a variable that is "determined by other variables" (cited in Ellis, 2012, p. 697). The factors influencing WTC are situation-specific (Ellis, 2012, p. 697). One of the prominent studies on this variable was done by Yashima (2002), who illustrated in her study the necessity of knowing what learning a language means in a context before imposing a definition/model developed elsewhere. In other words, the definition of and attitudes toward language learning should be bottom-up as opposed to top-down. Yashima's (2000) study reveals how international posture (the general attitude toward the international community) figures both as a direct and an indirect variable depending on the context (p. 62). However, Kang's (2005) study on four Korean adult males learning English, where they were paired with native speakers to communicate freely, showed no direct relationship between international posture and WTC (Ellis, 2012, p. 698). As far as learning the language to communicate is concerned, Ellis (2012) related WTC to CLT (Communicative Language Teaching), saying that learners who are willing to communicate benefit from CLT, whereas learners who are not willing to communicate learn better from more traditional approaches.
Comparing WTC inside and outside of classes, MacIntyre et al. in their study found WTC to be a "stable, trait like factor" (cited in Ellis, 2012, p. 698), the same both inside and outside the classroom for Anglophone learners of L2 French in Canada. Adding an interesting dimension, Dörnyei and Kormos' (2000) study revealed a relation between WTC and attitude towards the task. They found that learners with a positive attitude towards the task had more willingness to communicate, whereas the correlation was close to zero when the learners had a negative attitude towards the task (Ellis, 2012, p. 698). The current study To develop a general understanding of the roles these IDs played in Masha's acquisition of English, the current two-phase study adopts a concatenated approach, or a research-then-theory approach, through a structured questionnaire interview with a general research question: "To what extent do learning styles, personality, anxiety, and willingness to communicate account for Masha's L2 achievement?" The questionnaire has 33 questions with separate sections on each of the IDs mentioned above. Analyzing the interviewee's responses to these questions, the researcher narrowed down the focus of the research question to the two most important IDs influencing Masha's learning of English, namely 'anxiety' and 'willingness to communicate.' As the researcher analyzed the raw data collected through the questionnaires, he found that Masha was mostly an autonomous learner who learned best by working on her own. She found classroom learning boring and frustrating, whereas she learned unconsciously through TV or movies and found it fun. She was very conscious about making errors in class, whereas she did not care much about the errors she made outside classes during her spontaneous speech. She also learned through application, so, naturally, she did not find the grammar-focused classroom instruction very engaging, frequently getting distracted. Moreover, being quite a talkative person outside class, Masha was quite reserved in the classrooms. Finally, although unwilling to communicate in the classroom, Masha was quite enthusiastic about out-of-class communication. Summarizing her responses from the general questionnaire, the researcher could detect a very different behavioral pattern in Masha's attitude inside and outside of classrooms. Based on this finding, he decided to further explore Masha's 'anxiety' and 'willingness to communicate' in and out of the classroom. The research question was revised to the following:
• To what extent do 'anxiety' and 'willingness to communicate' account for Masha's language development in and out of class?
The researcher adapted each of the scales into two parts: one part investigating Masha's behavior in the classroom and the other part outside of class. Each of the parts had 10 statements to specifically find out Masha's behavioral differences in and out of class (please see the fully developed scales in appendices B, C, D, and E). In designing both scales, the same items (for example, her level of confidence in and out of class) were used so that the researcher would be able to compare the findings with each other. The researcher adopted a well-organized pattern for the interviewee to feel comfortable in answering the questions.
The scale survey results in both 'anxiety' and 'willingness to communicate' strongly correlate with the findings of the general questionnaire interview in that Masha's anxiety level is very high when she learns in class and extremely low (in fact the lowest) during her outside-class interactions. Similar results were found with her 'willingness to communicate': very low in class yet very high outside of classes. Anxiety Masha's response to classroom anxiety is similar to that of one of the students, namely Monique, in Ellis and Rathbone's (1987) study, in which the learner "felt stupid and helpless in class" (cited in Ellis, 2012, p. 692). Also, Eysenck's finding that low-level anxiety can lead to more effort is similar to Masha's case, since her low anxiety outside the classroom proved to be facilitative in acquiring English. However, unlike Horwitz, Horwitz, and Cope's (1986) findings, Masha's high anxiety level may not entirely reflect her "apprehension at having to communicate spontaneously" (as cited in Ellis, 2012, p. 692). It is applicable only when she is in the classroom. Willingness to Communicate As presented in MacIntyre's (2001) research, WTC is influenced by variables like "communication anxiety" (cited in Ellis, p. 697), which is strongly reflected in Masha's case, as communication anxiety in the classroom prevents her from participating actively in the classroom activities. It can also be concluded that, given her autonomous and experiential learning style, she did not enjoy the teacher-centered deductive presentations of grammar rules. Also, Masha's personality had a decisive effect on her language learning. Being an extrovert outside class and quiet in class, Masha did not like to spend time in classrooms, and therefore she did not quite enjoy the "academic success" noted by Griffiths (1991) with regard to introverted learners (cited in Ellis, 2012, p. 674). Conclusion Finally, as the researcher compared his findings with his own learning background and IDs, he found a number of similarities as well as differences. The major similarities are:
• Both of them are the products of teacher-centered pedagogy
• Both of them studied similar textbooks
• Both of them had a high level of anxiety in the classroom
• Both of them were unwilling to communicate in class
While observing their differences, the researcher found the following:
• The interviewee had access to satellite TV channels while the researcher did not, and thereby could not avail himself of the opportunity of input flooding
• The interviewee's anxiety level was low outside the class, but the researcher had a high level of anxiety both inside and outside of his class till his tertiary level of education
• The interviewee was very willing to communicate outside the classroom, whereas the researcher was not until the later part of his tertiary level of education
Based on the similarities and differences between the researcher and the interviewee, it may be surprising to note that, despite the fact that the researcher did not have much opportunity for input for a long time, and that his anxiety level was quite high both inside and outside of class, the researcher still managed to learn English and reached an advanced stage of learning. So, it can be concluded that it may be difficult to measure the effect of IDs on language learning, since the apparent negative impact of an ID may not be negative, and vice versa. Limitations of the study This study has a few limitations.
Firstly, the researcher of the study, being the primary instrument for data collection and analysis, may not have fully overcome human subjective bias in selecting and organizing the issues that he found pertinent. Therefore, though the researcher was conscious about not influencing the study at all, there may have been unconscious attempts to manipulate the subject's answers, thereby affecting the reliability, validity, and generalizability of the study. Secondly, since it is a case study, it may never claim its findings to be truly representative of similar sets of subjects, as Hamel (1993) said, "…case stud[ies] [have] basically been faulted for its lack of representativeness" (p. 23). However, the researcher has listened to the tape-recorded interview repeatedly to detect and thereby remove any single example of subjective bias, to make the findings as objective as possible.
2023-07-11T01:33:08.854Z
2015-08-01T00:00:00.000
{ "year": 2015, "sha1": "08f1569f36767c6a1168669ba6bcb003257f74a0", "oa_license": "CCBY", "oa_url": "https://journals.ulab.edu.bd/index.php/crossings/article/download/223/204", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3c8f175abaef2d7614b2150a90845f4cd5b94a9a", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
204148652
pes2o/s2orc
v3-fos-license
The Origin of the Plant Diversity and the Origin of Human Races The article offers a parallel between the unlimited possibilities of plants to adapt and the unlimited possibilities of humans to adapt. While in Darwin's evolutionary theory, proclaimed in 1859, external influences modify the material body, for Goethe (in 1790) the external influences modify the archetypal form called the plants' spirit. For Goethe, the whole diversity of plants is a result of the adaptation of this initially created archetypal form, called the plant Spirit, to different environmental conditions - as it adapted, the plant's spirit took many different forms. I think this was a genius intuitive insight of Goethe, but like all intuitive insights it was too far ahead of its time to be appreciated. It seems that humans' unlimited ability to adapt to different environments has the same origin - it is done through the initially created archetypal form, called the human Spirit, which is a nonlinear electromagnetic field (NEMF). The quantum origin of this NEMF is the basis of humans' extremely flexible adaptation, which is the foundation of the origin of races. The racial features developed as adaptation to the different climatic conditions at different geographical latitudes. However, the unlimited ability of humans to adapt could only be explained if it was done through the quantum features of their NEMF, called the human Spirit. Goethe's Concept on the Origin of Plants' Metamorphoses We know that Goethe was a famous German writer, but the fact that Goethe wrote a book on plant metamorphoses remains unknown. Goethe wrote a book, On the Metamorphosis of Plants, which was published in 1790. This was 70 years before Darwin. In this book, Goethe asked: "If all plants were not modeled in one pattern, how could I recognize that they are plants?" [1]. According to Goethe, all the variety of plants we enjoy now evolved from a single plant prototype by adapting to different environmental conditions. In the process of adaptation, new forms of plants were developed, and Goethe defined 3 major cycles of expansion and contraction in each evolution: a. the expansion of foliage was followed by contraction into calyx and bracts; b. the expansion of the petals of the corolla was followed by contraction into stamen and stigma; and c. the expansion into fruit was followed by contraction into seed. When these 3 cycles of expansions and contractions were completed, the plant was ready to start all over again, but in a slightly modified form. Goethe explained: "Each contraction is withdrawing in order that a higher manifestation of the Spirit may take place" [1]. Thus, while in Darwin's evolution theory, proclaimed in 1859, external influences modify the organism, for Goethe the external influences modify the archetypal form called the plants' spirit. When adapting to different environments, the plant's spirit took many different forms [1]. Goethe's regular publisher refused to publish his manuscript on plants, telling him that he was a literary man, not a scientist. When the book finally got published in 1790 elsewhere, his manuscript On the Metamorphosis of Plants was completely ignored for 18 years. It took 30 more years before some attention was paid to Goethe's concept [1], but we are still not where Goethe was 230 years ago. This was because Goethe's way of thinking was very much ahead of his time. Goethe published his book 70 years before Darwin's evolution theory, which spoke of material changes.
Goethe spoke of Spirit changes, which we now know to be the nonlinear electromagnetic field (NEMF) [2]. Goethe labeled the tendency of the plant to grow upward toward the light as male (in ancient Tao texts, growth toward the light is Yang, and Yang is male). Goethe labeled the tendency of the plant to grow earthward toward the darkness and moisture of the soil as female. (In ancient Tao texts, the growth toward the Earth is Yin, and Yin is female - the Earth itself was always considered female - Mother Earth) [1]. The Greek philosopher Aristotle (384-322 B.C.) claimed that in each animal or plant, beside the bones, the blood, the nerves, the brain, and the flesh, there should be an etheric form, which he called Soul, but it was later specified as Spirit [1]. Soul is the unity of body and Spirit. The British physicist Sir Oliver Lodge (1851-1940) was convinced that "a whole world lays beyond the physical," and he joined the London Society for Psychical Research to widen his horizon on the non-physical. However, what was non-physical in his time is now the subject of a new branch of physics - nonlinear physics, which is capable of explaining the nonlinear electromagnetic field (NEMF) of living humans, animals, plants, and even non-living matter [3]. For almost 40 years, I have measured this weak nonlinear electromagnetic field with my patented supersensitive electronic equipment. This weak NEMF rules and regulates everything in the body from the Subconscious, to which all organs are subordinated. It is done by a Quantum Computer, which operates with the waves of this NEMF and, from the Subconscious, rules and regulates everything in the body. Our Mind and emotions are parts of it [4][5][6]. The Human Races - Genetic Manipulation or Adaptation? If all plants originated from one plant prototype endowed with unlimited ability to adapt and, by adapting to different environmental conditions, evolved into a myriad of different plants, a natural question arises: How did the human races on planet Earth appear? Did humans evolve into different racial prototypes when adapting to the different climates of planet Earth, just like the plants did? Can I provide a proof for it? Yes, I can. Three Italian genetic specialists were paid by UNESCO for 17 years to travel all over the planet, study the human genes, and map them. They did, and they published their results in the book History and Geography of the Human Genes [7]. Since the dark-skinned Africans have the same features (dark skin and tightly curled hair) as the Australian aborigines, they assumed they would find common genes, regardless of how far Australia was from Africa. However, their genetic studies [7] did not find any common genes between the Africans and the Australian aborigines - none whatsoever. They published their results without commentary. However, since they both live at the same geographic latitude, obviously, when adapting to the hot climate of this geographic latitude, they developed the same features - dark skin and tightly curled hair. The Ethiopians of East Africa have the same dark skin and tightly curled hair as the Australians and the West Africans because they live at the same hot-climate latitude. However, the Ethiopian facial features are different from those of the West Africans, because ancient Ethiopian texts say that one of their tribes, Oromo, came from the Pacific. But since they lived at this hot-climate latitude for many thousands of years, they developed the same features - dark skin and tightly curled hair. Let us take as an example the Mongoloid race. The Eskimo, who live far to the north, have the same slanted eyes as the Mongoloid race.
Probably, the slanted eyes were developed as an adaptation to the strong reflection of sunlight from the snow. With time, some of them moved south to present-day Mongolia, China, Korea, and Japan, but the slanted eyes stayed with them. Let us take as an example the white race. They lived at latitudes between the northern Eskimo and the southern black race, and when adapting to their environment, with less sun and cold winters, they developed the features that are specific to them - white skin and narrow nostrils to decrease the flow of cold air. Thus, I truly believe that there was one prototype of humans on Earth at the beginning. The different races developed when adapting to the different climates of the different geographic latitudes, and we have genetic studies that proved it [7]. Considering this, racial discrimination does not make any sense, because we are the same people. If some of us look different, it is because their ancestors lived at a different geographic latitude for thousands of years and, adapting to these conditions, developed different features. Conclusion If I intended to write an article about the origin of races, why did I start with the origin of plants? How does Goethe's concept relate to the origin of races? Can we carry Goethe's concept of the evolution of plants over to the evolution of the Human Races? I started with Goethe's concept of the origin of plants because I believe I can apply the concept to the origin of races. The Italian genetic specialists studied the DNA, but Russian scientists found that DNA emits photons and is influenced by photons [8]. If the adaptation is done not by direct modification of DNA, but through the NEMF (called Spirit), the quantum relation to the environment would allow a much more sensitive and flexible feedback connection, which would explain our unlimited ability to adapt [9]. Obviously, the photons that influence DNA are part of our weak NEMF, which rules and regulates everything in the body. This weak NEMF (which Goethe called Spirit), being very sensitive to changes in the environment, is the basis of our unlimited possibility to adapt.
2019-09-26T08:51:09.914Z
2019-07-25T00:00:00.000
{ "year": 2019, "sha1": "c3734ffee6209b148ff62ba3c4d7db171e5f9266", "oa_license": "CCBY", "oa_url": "https://biomedgrid.com/pdf/AJBSR.MS.ID.000782.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "4554f17c82625bdd02596d8d83cf014238269c58", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Biology" ] }
233941545
pes2o/s2orc
v3-fos-license
Characterization of the Evolution of Noble Metal Particles in a Commercial Three-Way Catalyst: Correlation between Real and Simulated Ageing : Ageing of automotive catalysts is associated with a loss of their functionality and ultimately with a waste of precious resources. For this reason, understanding catalyst ageing phenomena is necessary for the design of long-lasting, efficient catalysts. The present work has the purpose of studying in depth all the phenomena that occur during ageing, in terms of morphological modification and deactivation of the active materials: precious metal particles and the oxidic support. The topic was investigated in depth using specific methodologies (FT-IR, CO chemisorption, FE-SEM) in order to understand the behavior of the metals and the support, in terms of their surface properties, morphology, and dispersion in the washcoat material. A series of commercial catalysts, aged under different conditions, have been analyzed in order to find correlations between real and simulated ageing conditions. The characterization highlights a series of phenomena linked to the deactivation of the catalysts. Pd nanoparticles undergo a rapid agglomeration, exhibiting a quick loss of dispersion and of active sites with an increase in particle size. The evolution of the support also highlights the contribution of chemical ageing effects. These results were also correlated with performance tests performed on a synthetic gas bench, underlining a good correspondence between vehicle- and laboratory-aged samples and the contribution of chemical poisoning to the vehicle-aged ones. The collected data are crucial for the development of accelerated laboratory ageing protocols, which are instrumental for the development and testing of long-lasting abatement systems. Introduction Automotive catalysts usually work in extremely harsh conditions: at high temperature due to the exothermic reactions involved, with local peaks above 1000-1100 °C, and in a very variable chemical environment. This is particularly true in gasoline-fueled vehicles, where the main system adopted to reduce engine emissions is represented by the three-way catalyst (TWC) [1,2]. This catalytic system is able to perform simultaneous oxidation of CO and unburned hydrocarbons (HC) and reduction of nitrogen oxides (NOx), thanks to a well-controlled air-to-fuel ratio that keeps the gaseous stream around stoichiometry [3]. These conditions promote a series of phenomena called thermal ageing, which are the primary sources of catalyst deactivation [4][5][6]. The high temperature induces a series of physical processes that lead to a modification of the washcoat structure, generically defined as sintering. These phenomena lead to a loss of active surface via structural modification of the porous support, with a decrease of the surface area of the carrier [7,8]. Highly dispersed noble metal particles are likewise prone to sintering: they agglomerate into larger crystallites, losing dispersion and active sites.
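For the CO chemisorption measurements listed among the methodologies above, dispersion — the fraction of metal atoms exposed at the particle surface — is the standard way to quantify this particle growth. A minimal sketch of the usual arithmetic, with invented uptake and loading values (the paper's own numbers are not reproduced here), assuming a 1:1 CO:Pd adsorption stoichiometry and the common spherical-particle estimate d ≈ 1.12/D nm for Pd:

```python
M_PD = 106.42   # g/mol, molar mass of palladium
STOICH = 1.0    # assumed CO:Pd adsorption stoichiometry

def pd_dispersion(co_uptake_umol_per_g, pd_wt_fraction):
    """Fraction of Pd atoms exposed at the surface (dispersion, 0-1)."""
    pd_umol_per_g = pd_wt_fraction * 1e6 / M_PD   # total Pd content, umol per g catalyst
    return STOICH * co_uptake_umol_per_g / pd_umol_per_g

# Invented example: 20 umol CO/g uptake on a 1.5 wt.% Pd catalyst.
d = pd_dispersion(co_uptake_umol_per_g=20.0, pd_wt_fraction=0.015)
print(f"dispersion ~ {d:.2f}; mean Pd particle size ~ {1.12 / d:.1f} nm")
```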
Carbon monoxide is a very versatile molecule, widely used to probe both the acid/base properties of oxide surfaces and adsorption on metallic phases. Due to the high affinity of noble metals for CO, adsorption on metal particles is detectable at room temperature, while adsorption on the support oxides requires low temperatures [21–23]. In a first experiment, CO adsorption at room temperature (RT) was performed on the washcoat powder extracted from a fresh catalyst sample (see the Materials and Methods section below). The sample was outgassed in vacuum at 400 °C for 30 min. The absorbance spectra recorded upon CO dosage (not shown) did not show any trace of peaks associated with the interaction between the probe molecule and the noble metal particles, irrespective of the partial pressure of CO on the sample. This lack of reactivity of the fresh catalyst was interpreted as originating from the low severity of the surface outgassing protocol, which did not clean foreign adsorbed species from the surface. In order to eliminate these species, an oxidation-reduction pretreatment was adopted. The treatment with O2 eliminates organic adsorbed contaminants by oxidation, and the successive reduction with H2 reactivates the metal surface, reducing any oxidized metal sites. Due to the better results obtained with the oxidation-reduction pretreatment, this was applied also to all the other samples, which had been exposed to the ageing environment (25,000 km, 60,000 km, Lab-aged). In Figure 1 the CO adsorption measurements performed at RT after the oxidation-reduction pretreatment are presented. For the fresh sample (Figure 1a), the IR spectra show two peaks, with intensities that increase with the pressure of CO dosed to the sample. The first one is around 2100–2000 cm⁻¹, typical of the stretching vibration of CO adsorbed in the on-top position on a Pd⁰ site, and the second peak is around 2000–1700 cm⁻¹, typical of the CO stretching in the bridge-bonded configuration between two Pd⁰ sites [24,25]. Increasing the CO pressure, the peaks gain in intensity and show a progressive shift to higher wavenumbers, due to the increase of the CO coverage, which enhances the dipole coupling [26]. Upon outgassing, the CO coverage decreases and this causes a red-shift back towards the original position. The highest occupied molecular orbital (HOMO) of CO has a slightly anti-bonding character. Therefore, when transition metal centers are involved, thanks to the σ-donation and π-backdonation effects, involving the HOMO and LUMO of CO and the uppermost d states of the metallic phase, the appearance of absorption bands at higher or lower frequency with respect to the gas phase (2143 cm⁻¹) is expected [27]. The relative amount of on-top vs. bridged carbonyls can provide information on the metal particle dispersion. However, the quantitative determination of the distribution of CO between on-top and bridge positions is very difficult, and it was not possible on the collected data due to the overlap of the peaks. Nevertheless, qualitative trends could be estimated. Pd tends to form bridged carbonyls when reduced, which are more stable than the linear ones [28]. A high contribution of the linearly bonded CO is an indication of a high dispersion of the Pd, because the proximity of two Pd atoms is necessary to form bridges. In Figure 1b, the IR spectra recorded at increasing CO pressure on the 25,000 km washcoat powder sample are presented.
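As a small illustration of how these assignments partition the spectrum, the wavenumber ranges quoted above can be encoded directly; the numeric boundaries below are approximate guides taken from this discussion, not a general assignment table:

```python
# Toy assignment of CO stretching bands using the wavenumber ranges quoted
# above for Pd0 carbonyls; the boundaries are approximate guides only.
def assign_band(nu_cm1):
    if abs(nu_cm1 - 2143) <= 5:
        return "unperturbed / gas-phase CO"
    if 2000 <= nu_cm1 <= 2100:
        return "on-top CO on a Pd0 site"
    if 1700 <= nu_cm1 < 2000:
        return "bridged CO between two Pd0 sites"
    return "outside the Pd0 carbonyl regions discussed here"

for nu in (2086, 1940, 2143):
    print(nu, "->", assign_band(nu))
```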
Comparing the spectra of the fresh sample with those of the 25,000 km one (a sample aged on vehicle over 25,000 km of operation; see Materials and Methods), the overall intensity of the carbonyl bands is strongly decreased, indicating a loss of Pd active sites. This effect can be linked to sintering phenomena that occurred at high temperature, leading to a growth of the noble metal particles and to a loss of surface of the porous support, which could lead to an encapsulation of the nanoparticles, making them completely inaccessible to the probe molecule. These effects are enhanced in the sample aged on vehicle for 60,000 km (60 k) (Figure 1c), where the absorbance peaks associated with CO adsorption have completely disappeared. A previous work had shown a good correlation between the 60,000 km samples and that aged in the laboratory at 1100 °C for 7 h in hydrothermal conditions (air-Lab-aged) [29]. In order to verify this, the latter was analyzed using the CO probe technique. Even in this case, the stretching peaks of adsorbed CO were not detectable (spectra not reported for the sake of brevity). Given the non-detectability of CO on the metal particles of the 60,000 km sample, the more aged samples, 80,000 km and FUL (Full Useful Life, see below), were not studied with this technique. Finally, it is worth noticing that the CO adsorption experiments did not detect distinct peaks for CO adsorption on Rh, which is present on the catalyst in a 1:11 ratio with Pd, clearly too low for detection. We recognize that this is a limitation of our study, due to the great importance of Rh for the NOx reduction functionality and to its peculiar deactivation mechanism [30], which should be addressed with more sensitive methods.

Infrared analyses of the oxidic support with CO as a probe molecule require performing the adsorption at low temperature, due to the low binding energies involved. The support has an active role in the TWC, and a complete characterization of these materials requires understanding whether ageing influences the support and in particular the oxygen storage capacity (OSC) of the material. This is the ability of the Ce–Zr mixed oxide support to adsorb/release oxygen during operation, storing it during a lean phase and releasing it during a rich one. The OSC closely correlates with the catalytic performance, because it leads to an increase of TWC efficiency by enlarging the air-to-fuel operating window [20].
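This buffering role lends itself to a toy illustration; everything below (the capacity units, the lambda trace, the one-step fill/drain rule) is an invented sketch of the concept, not a model from this work:

```python
# Toy illustration of how an oxygen-storage buffer widens the operating
# window: while the buffer can fill or drain, the noble metals see a feed
# near stoichiometry (lambda = 1) despite fluctuations upstream.
def effective_lambda(lam_in, fill, capacity):
    """Return (lambda seen by the noble metals, updated buffer fill)."""
    if lam_in > 1.0 and fill < capacity:      # lean phase: store excess O2
        return 1.0, min(capacity, fill + (lam_in - 1.0))
    if lam_in < 1.0 and fill > 0.0:           # rich phase: release O2
        return 1.0, max(0.0, fill - (1.0 - lam_in))
    return lam_in, fill                       # buffer full/empty: no help

fill, capacity = 0.5, 1.0                     # fresh catalyst: large buffer
for lam in (1.02, 0.97, 1.05, 0.95):          # fluctuating feed
    lam_eff, fill = effective_lambda(lam, fill, capacity)
    print(f"feed lambda={lam:.2f} -> catalyst sees {lam_eff:.2f}, fill={fill:.2f}")
```

With an aged, smaller capacity, the same trace saturates the buffer and the raw fluctuations reach the metals, which is the qualitative link between OSC loss and deactivation.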
Some samples were analyzed by CO adsorption at liquid nitrogen temperature (LNT). Figure 2 shows the spectra recorded on the fresh sample. They show an intense peak in the range 2200–2100 cm⁻¹, whose intensity increases with CO pressure. The peak red-shifts from 2177 to 2155 cm⁻¹ upon increasing CO coverage due to a chemical effect: as the amount of CO adsorbed on the surface increases, the σ-donation decreases. Consequently, the electron density in the slightly antibonding HOMO orbital of CO increases and the CO stretching frequency decreases. This peak is probably generated by the superposition of different contributions of CO adsorbed on Ce⁴⁺ and Zr⁴⁺ cations, since their stretching vibrations fall in this spectral region [22]. The convolution of these contributions into one peak, which should be separated in the pure oxides, can be associated with a compositionally homogeneous solid solution of the Ce and Zr oxides, with a negligible contribution of segregated pure oxide phases. At lower wavenumbers, bands related to on-top (2086 cm⁻¹) and bridged (1997 and 1940 cm⁻¹) Pd⁰ carbonyls are present [25]. In contrast to the measurements performed at RT, at LNT the bridged carbonyl region shows complex features, with the presence of two bands. Typically, the IR spectra of CO adsorbed on reduced Pd are interpreted on the basis of what occurs on the most stable Pd faces, Pd(111) and Pd(100) [31,32]. The peak at 1997 cm⁻¹ is assigned to bridged carbonyls on Pd(100) faces, while the peak at 1940 cm⁻¹ is related to bridged species on the Pd(111) face. The band at 2086 cm⁻¹ (2060 cm⁻¹ for very low CO coverage) is related to linear carbonyls on the Pd(111) face. As also observed at RT, the band of linear carbonyls is asymmetrical due to a shoulder on the low-wavenumber side. This shoulder is reasonably due to the presence of defects such as edges, corners and kinks on the noble metal particle surface [33]. The stability of the carbonyl species during outgassing (dashed lines in the figures) further helps to discriminate between CO bonded to Pd and to the support. Pd carbonyls are more stable than the carbonyls of the support, since Pd⁰ bonds CO both by σ-donation and π-backdonation, while Zr and Ce cations bond CO exclusively via σ-donation, and the related bands are completely removed by outgassing. In the aged samples, these peaks display the same behavior observed at RT, with a progressive disappearance of the signal due to sintering of the noble metal particles (Figure 2). The peaks associated with adsorption on the support material display a similar behavior: the intensity decreases upon ageing, due to the sintering of the Ce–Zr oxides. This phenomenon causes not only a loss in intensity of the band but also a decrease of the chemical effect, which causes a shift to higher frequencies at maximum coverage with respect to the fresh case. In particular, in the 60 k sample (Figure 3a) a peak assigned to monocarbonyls on reduced Ce³⁺ species appears at 2116 cm⁻¹ (see the insets of Figure 3). The assignment of this band to carbonyls on the support cation is corroborated by the low stability of these species, which are removed by outgassing (dashed line in Figure 3a). The appearance of this peak can be linked to the effects of ageing: the high-temperature sintering of these components hindered the Ce⁴⁺ to Ce³⁺ redox reaction [34].
In addition, there may be a contribution of chemical ageing, with Ce reacting with the phosphate compounds present in the engine-out gas, coming from the combustion of engine oil additives, and forming the very stable CePO4 phase [16–35]. These effects lead to the immobilization of Ce, hindering its participation in the redox reactions necessary for oxygen storage. In this case, the peak can be attributed to both effects, and it is consistent with the reduction of the OSC observed during the performance tests (see below). The laboratory-aged sample (Figure 3b) shows a less intense peak associated with Ce³⁺, probably because in this case only the support sintering contributes. This sample has not been in contact with the chemical species deriving from combustion, so chemical ageing effects are not expected in this case. As for the laboratory-aged sample, the contribution of Pd⁰–CO at 2096 cm⁻¹ is quite evident. Moreover, it is worth noting that in the 60,000 km sample the Pd⁰–CO contribution might be associated with the shoulder of the peak at 2116 cm⁻¹ (the Ce³⁺–CO band), which is absent for the laboratory-aged one.

Evaluation of Metal Particles Dispersion by CO Pulse Adsorption

The dispersion of the active metal particles has been studied with quantitative pulse CO chemisorption experiments, which allow one to determine the adsorbed amounts and to model the particle size approximately [36]. The results of the CO-pulse experiment on the fresh sample are presented in Figure 4a,b, reporting the instantaneous and integrated TCD signal (proportional to the CO concentration at the reactor outlet). The first peaks are attenuated due to the chemisorption of CO on the surface of the noble metal particles, while successive pulses gradually increase until saturation of the adsorption capacity of the metal, when the peaks reach a constant intensity. The CO uptake was calculated from the integral of the peaks (Figure 4b). The dispersion of the Pd particles was calculated from the Pd:CO ratio. CO molecules can chemisorb on superficial Pd atoms in linear or bridged form, and the coexistence of both configurations was confirmed by IR spectroscopy.
Since the two IR peaks with similar intensities appearing after exposing the catalyst to CO were ascribed to the two CO chemisorbed species, and considering their very similar extinction coefficients (in accordance with the literature), a Pd:CO ratio of 1.5 can be suitably assumed for the calculations [37]. Using this assumption, from the amount of chemisorbed CO it is possible to calculate the number of Pd sites involved in the reaction and to evaluate the ratio between this value and the quantity of Pd present in the sample, obtained from chemical analysis (1% Pd by weight). The fresh sample exhibits a quite high palladium dispersion, around 51%, expressed as the ratio between the moles of Pd involved in the total CO uptake and the total Pd moles in the sample. This result indicates that the noble metal is distributed on the support in very small particles, consistently with the absence of visible Pd particles in the SEM observations reported in the following. In fact, an average metal particle size of about 2 nm can be estimated from CO chemisorption. For the 25,000 km sample, the situation changes significantly: the difference between the intensity of the first peaks and the peaks at saturation is much smaller, and the adsorbed amount is determined with low accuracy (please note the different scales of Figures 4 and 5). The dispersion of Pd in this sample has decreased to 8%, and for the 60,000 km catalyst it is below 1%, although in the latter case the CO uptake is too low to obtain reliable values. These results point out that the metal particles are sensitive to sintering during the catalyst life cycle, as suggested by the FT-IR measurements. The CO chemisorption test was performed also on the laboratory-aged sample, in order to verify the correlation with the 60,000 km one. Even in this case the data obtained do not allow an accurate evaluation of the dispersion, which results below 1%. Due to the results obtained on these samples, samples from catalysts with higher ageing (80,000 km and FUL) were not analyzed.
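A minimal numeric sketch of this bookkeeping follows. The Pd:CO factor of 1.5 and the 1 wt.% loading come from the text; the CO uptake value and the spherical-particle relation d[nm] ≈ 1.12/D (a common approximation for Pd, not stated in the paper) are assumptions chosen to reproduce the reported ~51% dispersion and ~2 nm size:

```python
# Sketch of the dispersion estimate described above.
M_PD = 106.42  # g/mol

def pd_dispersion(co_uptake_umol, sample_mass_g, pd_wt_frac=0.01, pd_per_co=1.5):
    surface_pd_mol = co_uptake_umol * 1e-6 * pd_per_co  # Pd atoms titrated by CO
    total_pd_mol = sample_mass_g * pd_wt_frac / M_PD    # Pd from chemical analysis
    return surface_pd_mol / total_pd_mol

def mean_diameter_nm(dispersion):
    # Common spherical-particle approximation for Pd: d [nm] ~ 1.12 / D.
    return 1.12 / dispersion

D = pd_dispersion(co_uptake_umol=6.4, sample_mass_g=0.200)  # illustrative uptake
print(f"dispersion ~ {D:.0%}, estimated particle size ~ {mean_diameter_nm(D):.1f} nm")
```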
SEM Measurement of Metal Particles Dimensions

In a previous work [29], TEM images collected on fresh and FUL samples did not allow observation of the metal nanoparticles, due to the low contrast between the noble metals and the substrate. The fresh sample washcoat displayed a coarse morphology, with porosity spanning a wide range of length scales. This porous structure was gradually lost with ageing, which produced a more compact washcoat [29]. In this work, we have collected high-resolution (HR) FE-SEM (Field Emission Scanning Electron Microscopy) images of the differently aged samples, highlighting the morphological effects of support and metal particle sintering. The FE-SEM images of all the aged samples show the presence of bright small particles of uniform size and a variable amount of large bright particles of different sizes, dispersed in the washcoat support. Figure 6 shows the case of the FUL sample. In order to identify these components, a series of EDX measurements was performed, analyzing these particles point by point and recording a map over the whole samples. The element mapping highlights a high concentration of Pd in correspondence with the lighter particles. Performing a single-point analysis on these bright zones, the presence of only Pd, O and C is observed where the dimensions of the particles are sufficiently large to shadow the contribution from the support. Therefore, it is possible to identify these components as Pd/PdO particles, grown due to sintering of the catalytic material. On the contrary, in the fresh sample no particles were detected, but performing EDX analysis at different points, a constant Pd signal was recorded, originating from a uniform distribution of well-dispersed nanoparticles. This is in agreement with the CO chemisorption analysis. A series of diameter measurements was taken, in order to quantify the particle size growth upon ageing. Table 1 presents the minimum and maximum diameter values recorded on the analyzed samples and the average value. The values presented are the results of measurements performed on 20 different Pd particles detected in the same image. The diameter of the noble metal particles increases with ageing, exceeding an average value of 100 nm from the 60,000 km sample onwards. This is in agreement with the results obtained by the FT-IR and CO chemisorption measurements, which highlighted a strong sintering of the noble metal particles already in the 25,000 km sample. With a particle diameter above 100 nm, the interaction of CO with the catalyst is limited: not enough to lead to a complete deactivation of the material, but leading to results below the detection limits of the applied techniques. The FUL sample is confirmed to be the worst case, with Pd agglomerates up to 0.8 µm in diameter.
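The survey reported in Table 1 amounts to simple per-image statistics; a sketch of that bookkeeping with placeholder numbers (not the tabulated values):

```python
# Min/max/mean over the particles measured per image, as described above.
# The diameter lists are placeholders, not the values reported in Table 1.
from statistics import mean

diameters_nm = {
    "25,000 km": [55, 62, 71, 80, 94],
    "FUL":       [180, 320, 510, 680, 800],
}
for sample, d in diameters_nm.items():
    print(f"{sample}: min {min(d)} nm, max {max(d)} nm, mean {mean(d):.0f} nm")
```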
Functional Characterization of Catalyst Samples

We consider now the functional tests performed on the samples: Figure 7 presents the light-off temperatures (T50) for the oxidation of CO and HC, recorded during the efficiency tests, and the results of the oxygen storage capacity (OSC) determination. The light-off temperatures increase with ageing for both reactions, highlighting the effect of catalyst deactivation. The differences in T50 between CO and HC are small, less than 10 °C in almost all the samples tested, except in the case of the FUL sample, which shows a much higher light-off temperature for HC than for CO abatement. The oxygen storage capacity decreases significantly upon ageing (Figure 7b). The OSC of the 25,000 km sample is half that of the fresh one. Similarly, we observe almost a halving of the OSC between 25,000 km and 60,000 km. This phenomenon is linked to the redox properties of the oxygen storage components, which were found to change drastically with increasing ageing, due to thermal and chemical ageing, as already highlighted by the LNT FT-IR measurements. In terms of performance, the Lab-aged sample is quite similar to the 60,000 km sample, with almost equal light-off temperatures. The OSC displays some differences between these samples, with lower values for the vehicle-aged ones, probably due to the chemical ageing contribution. In the FUL sample, the oxygen storage capacity is completely compromised.

Discussion

The characterization of all the samples through the applied techniques allows observing the evolution of the catalyst upon ageing, providing a consistent set of qualitative and quantitative data. Sintering of the noble metal particles and of the Ce–Zr oxide support leads to a loss of catalytic functionality. The Pd particles undergo rapid agglomeration, with a rapid loss of dispersion and an increase of particle radius. This leads to a loss of active sites, as proven by the FT-IR measurements at RT and LNT, with a progressive disappearance of the signals associated with the vibrational modes of CO molecules interacting with the noble metal active sites. These physical changes are reflected in an increase of the light-off temperature measured during the synthetic gas bench tests. In addition, the FTIR measurements at liquid nitrogen temperature allow observation of the sintering of the Ce–Zr oxides, but also of the contribution of chemical ageing phenomena, which correlate with a loss of oxygen storage capacity of the materials. This is particularly true for the 60,000 km sample, where it is possible to observe the combination of these two effects.
By comparing the on-vehicle and the lab-aged samples, we conclude that the 60,000 km and laboratory-aged samples are very similar in terms of Pd sintering, while the lab-aged one shows a lower contribution of support ageing, probably due to the absence of chemical ageing effects. Also in terms of catalytic performance and catalyst morphology there are similarities between these two samples. Some differences remain due to the chemical ageing, which it was not possible to replicate on the laboratory scale. Another interesting conclusion is that the FUL engine-bench-aged sample represents a limit case, displaying results very far from both the vehicle-aged and the laboratory-aged samples. This is remarkable considering the fact that the oven ageing methods were originally conceived as a laboratory FUL testing method. On the contrary, they proved to be more representative of an intermediate-mileage ageing. We recognize that other factors, not studied in this work, may affect the catalyst performance upon ageing: the evolution of the Rh component of the metal particles, coke formation, and the evolution of the acid–base properties of the support. These additional factors belong to a higher level of detail as compared with the main phenomena studied in this work. Nevertheless, the data we have collected represent a reference framework for the development of improved laboratory ageing protocols, meeting the demand for an increased catalyst lifetime.

Catalysts

The chosen catalyst technology, already used in our previous work [29], is a commercial formulation based on double-metal Pd/Rh technology (55:5), with a total PGM loading of 60 g/ft³ (1 wt.%), supported on a porous substrate mainly composed of Ce, Zr and Al oxides, with dopants (Ba, La). The component, a cylindrical ceramic monolith of 0.55 L (4.2 in × 2.5 in) characterized by a high density of channels (900 cpsi) with hexagonal shape, was core-drilled to obtain two cylindrical samples of 1.5 in diameter and 2.5 in length, one for the morphological characterization and one for the functional testing. The reference real samples came from two vehicles equipped with 1.2 L, 69 hp (horsepower) gasoline engines compliant with the Euro 6b emission standard. The ageing mileages were 25,000, 60,000 and 80,000 km. One sample came from an engine bench ageing cycle: an engine-dynamometer set-up (with the same engine as the vehicle) was used to perform an accelerated ageing protocol simulating a catalyst aged at the full useful life of the vehicle, 160,000 km, in severe driving conditions (denoted as Full Useful Life (FUL) in the paper). For the laboratory deactivation protocol, a sample was core-drilled from a fresh monolith and aged using a tubular oven equipped with a gas line and a peristaltic pump, in order to flow a mixture of air and water vapor over the sample during the ageing operation. The temperature and time of ageing were 1100 °C for 7 h, a common industrial standard to simulate an end-of-life condition in a hydrothermal oxidizing environment. The ageing was performed in a pure air flux (4 L/min), with 10% water (sample denoted as Lab-aged in the paper). All the cored samples extracted from the fresh monolith were conditioned, before undergoing testing, in a static oven at 650 °C for 2 h to eliminate any possible residue of production intermediates.
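A quick consistency check of these figures; the dimensions are assumed to be in inches (which reproduces the quoted 0.55 L monolith volume), and the arithmetic is a sketch, not part of the paper's methods:

```python
# Back-of-envelope check of the sample geometry and PGM loading quoted above.
import math

CM_PER_IN = 2.54

def cylinder_volume_L(diameter_in, length_in):
    r_cm = diameter_in * CM_PER_IN / 2.0
    return math.pi * r_cm**2 * (length_in * CM_PER_IN) / 1000.0

monolith_L = cylinder_volume_L(4.2, 2.5)  # ~0.57 L vs. the quoted 0.55 L
core_L = cylinder_volume_L(1.5, 2.5)      # cored test sample, ~72 mL
pgm_g_per_L = 60.0 / 28.3168              # 60 g/ft^3 (1 ft^3 = 28.3168 L)
print(f"monolith ~{monolith_L:.2f} L, core ~{core_L*1000:.0f} mL, "
      f"PGM per core ~{pgm_g_per_L * core_L:.2f} g")
```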
Noble Metal Characterization Techniques

The washcoat was extracted from the first batch of cored samples, in order to remove the contribution of the ceramic support, which is unnecessary for the characterization of the ageing phenomena. This extraction was performed using a method developed by some of us and described in a previous work [38], based on the detachment of the material by thermal shock: the sample was soaked first in liquid nitrogen, then in water, and blown with gaseous N2 to remove water residues from the channels and avoid substrate detachment due to ice formation. The extracted washcoat was dried, obtaining a fine powder. The washcoat powders were compressed into self-supporting discs (~20 mg cm⁻²) and placed in IR cells suitable for high-temperature treatments and FT-IR analysis at room and liquid-nitrogen temperature (RT and LNT, respectively). Measurements performed at LNT allow characterization of the metal phase and of the oxidic support as well. Before the IR measurements, the samples were pretreated following two different protocols. The first one was a treatment in vacuum at 400 °C for 30 min. The second one was an activation protocol through oxidation and subsequent reduction of the material. The oxidation step was performed in a static system using dry O2 on the sample at 400 °C. Oxygen was dosed twice, at about 40 mbar for 15 min per step: the first treatment was useful to remove reduced species possibly adsorbed on the surface; the second treatment was performed to ensure the oxidation. The reduction step was performed in the same way as the previous one, but the treatment was carried out in H2. Hydrogen was dosed twice at 40 mbar for 15 min per step. The double treatment was intended to ensure the reduction. Finally, the sample was outgassed and cooled to RT in vacuum. Absorption/transmission IR spectra were run on a Perkin-Elmer FTIR System 2000 spectrophotometer (Perkin-Elmer, Waltham, MA, USA) equipped with a Hg-Cd-Te cryo-detector, working in the wavenumber range 7200–580 cm⁻¹ at a resolution of 2 cm⁻¹, with 60 scans. IR absorbance spectra were acquired at RT and LNT on the pretreated samples at increasing CO pressure up to 20 mbar. The distribution and dispersion of the noble metal particles in the washcoat samples were analyzed via pulse CO chemisorption. These analyses were performed using a Thermo Scientific 1100 TPDRO (Thermo Fisher Scientific, Waltham, MA, USA) equipped with a thermal conductivity detector (TCD). The catalyst powder (200 mg) was placed in a quartz reactor, between two quartz wool layers. A reducing pre-treatment was carried out in a 20 mL min⁻¹ flow of 5% H2 in Ar, by heating the sample up to 300 °C with a 10 °C/min ramp and maintaining this temperature for 120 min. CO chemisorption was performed at room temperature, using He as carrier gas (20 mL min⁻¹ flow) and sending pulses of 800 µL of a mixture of 10% CO in He until saturation [26]. The TCD detects differences in thermal conductivity between the gas entering and leaving the reactor. Therefore, to detect changes in CO concentration, He must be used as carrier gas, since it has a very different thermal conductivity. During the test, He flows continuously and every 10–15 min a CO injection is made into the He flow. After a few minutes the detector reveals a signal proportional to the CO outlet concentration (the presence of this gas lowers the average thermal conductivity of the mixture). Each pulse is therefore recorded as a TCD peak whose area is proportional to the non-adsorbed CO. If chemisorption occurs, the area of the peaks is initially smaller, and then it gradually increases until saturation of the surface. The amount of CO chemisorbed on the sample can be calculated from the integration of the peaks. From this datum, it is possible to estimate the metal dispersion [28].
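The bookkeeping behind this calculation can be sketched as follows; the pulse size follows from the stated protocol (800 µL of 10% CO in He, at ambient conditions), while the TCD peak areas are illustrative values:

```python
# Minimal sketch of the pulse-chemisorption uptake calculation: sum the CO
# missing from each TCD peak relative to the saturated (constant) peak.
PULSE_UMOL = 0.800 * 0.10 / 0.024465  # mL * CO fraction / (mL per umol, ~25 C)

def co_uptake_umol(peak_areas, pulse_umol=PULSE_UMOL):
    a_sat = peak_areas[-1]            # last peak: surface assumed saturated
    return sum(pulse_umol * (1.0 - a / a_sat) for a in peak_areas)

areas = [120.0, 610.0, 930.0, 1000.0, 1000.0]  # illustrative TCD areas (a.u.)
print(f"chemisorbed CO ~ {co_uptake_umol(areas):.1f} umol")
```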
In order to observe the noble metal particles and follow their evolution during ageing, all the samples were observed using a field emission scanning electron microscope: a FE-SEM TESCAN S9000G (TESCAN ORSAY HOLDING, Brno, Czechia) equipped with a Schottky-type FEG source. The detectors were an In-Beam secondary electron (SE) detector with a resolution of 0.7 nm at 15 keV and a backscattered electron detector (BSD). Elemental analysis was measured with an EDX detector Ultima Max OXFORD (Oxford Instruments, Abingdon, UK) with AZTEC software.

Catalyst Functional Characterization

The second set of cored samples was prepared and used to evaluate the catalytic performance, measured on the full component (support + washcoat). The samples were tested on a synthetic gas bench specifically designed for the functional testing of automotive catalysts (Figure 8). A synthetic gas mixture (0.7% O2, 15% CO, 10% CO2, 1875 ppm HC, 600 ppm NO and 1% H2O), based on a carrier flow of N2 (SV = 50,000 h⁻¹), was heated to the desired temperature and fed to the heated catalyst. The gas mixture was analyzed by sampling both before and after the catalyst, in order to evaluate the flux composition and its variation during the reactions. This analysis was performed using a FT-IR Multi-gas Analyzer MKS 2030 HS for the simultaneous analysis of all chemical species. The gas mixture composition was designed to replicate typical gasoline engine emissions, using a HC mixture composed of short-chain hydrocarbons (ethylene, propylene and methane, 3.5/1/1). This system is not sufficiently fast to replicate the fluctuation between lean (oxidizing) and rich (reducing) conditions, so it works near the stoichiometric condition (λ = 1), in a slightly lean environment. Two types of test were performed. First, the conversion efficiency for CO and HC (due to the lean environment chosen) was measured by flowing the feed gas mixture over the sample during a temperature increase (ramp rate of 30 °C/min from ambient temperature to 500 °C). From this result, the light-off temperature (T50) of the catalyst was obtained, defined as the temperature at which the catalyst reaches 50% CO or HC conversion efficiency [39]. The second test was focused on the evaluation of the oxygen storage capacity (OSC) of the component, whose reduction upon ageing is extremely detrimental to the catalyst functionality. This measurement was performed at constant temperature (400 °C), saturating the sample with O2 and subsequently measuring the CO consumption upon dosage of a CO-rich feed (7000–8000 ppm). All the tests were performed three times, in order to verify the reproducibility of the results.
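The T50 extraction just defined reduces to interpolating the conversion ramp at the 50% crossing; a sketch with illustrative ramp data (not measured values from this work):

```python
# Light-off temperature (T50) from a ramp test: linear interpolation of
# conversion vs. temperature at the 50% crossing.
def t50(temps_c, conversions):
    pts = list(zip(temps_c, conversions))
    for (t0, x0), (t1, x1) in zip(pts, pts[1:]):
        if x0 < 0.5 <= x1:
            return t0 + (0.5 - x0) * (t1 - t0) / (x1 - x0)
    return None  # 50% conversion never reached within the ramp

ramp_t  = [150, 200, 250, 300, 350]        # deg C
co_conv = [0.02, 0.10, 0.38, 0.74, 0.98]   # CO conversion fraction
print(f"T50(CO) ~ {t50(ramp_t, co_conv):.0f} degC")
```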
Comparison theorems for the small ball probabilities of Gaussian processes in weighted $L_2$-norms

We prove comparison theorems for small ball probabilities of the Green Gaussian processes in weighted $L_2$-norms. We find the sharp small ball asymptotics for many classical processes under quite general assumptions on the weight.

Introduction

The problem of the small ball behavior of a random process $X$ in the norm $\|\cdot\|$ is to describe the asymptotics, as $\varepsilon \to 0$, of the probability $\mathrm{P}\{\|X\| \le \varepsilon\}$. The theory of small ball behavior for Gaussian processes in various norms has been intensively developed in recent decades; see the surveys [16], [15] and the site [17]. By the classical Karhunen–Loève expansion, one has the equality in distribution
$$\|X\|_\psi^2 \overset{d}{=} \sum_{j=1}^{\infty} \lambda_j \xi_j^2.$$
Here $\xi_j$, $j \in \mathbb{N}$, are independent standard Gaussian random variables, while $\lambda_j > 0$, $j \in \mathbb{N}$, are the eigenvalues of the integral equation
$$\lambda f(t) = \int_0^1 G_X(t,s)\,\psi(s)\,f(s)\,ds, \qquad 0 \le t \le 1,$$
where $G_X(t,s)$ is the covariance function of $X$ and $\psi$ is the weight. In the papers [19,20] the concept of the Green process was singled out, i.e. a Gaussian process whose covariance is the Green function of a self-adjoint differential operator. The approach developed in these papers allows one to obtain the sharp (up to a constant) asymptotics of small deviations in the $L_2$-norm for this class of processes. In the papers [1,2], using this approach, we calculated the sharp asymptotics of small ball probabilities for a large class of particular processes with various weights. In this paper we prove a comparison theorem for the small ball probabilities of the Green Gaussian processes in weighted $L_2$-norms. This theorem gives us the opportunity to obtain the sharp small ball asymptotics for many classical processes under quite general assumptions on the weight. For the Wiener process and some other processes this result was obtained in [4].

Let us recall some notation. A function $G(t,s)$ is called the Green function of a boundary value problem for a differential operator $L$ if it satisfies the equation $LG = \delta(t-s)$ in the sense of distributions and satisfies the boundary conditions. The space $W_p^m(0,1)$ is the Banach space of functions $y$ having continuous derivatives up to order $m-1$, such that $y^{(m-1)}$ is absolutely continuous on $[0,1]$ and $y^{(m)} \in L_p(0,1)$. $V(\dots)$ stands for the Vandermonde determinant.

The calculation of the perturbation determinant

Let $L$ be a self-adjoint differential operator of order $2n$, generated by the differential expression $\mathcal{L}v$ (1) and the boundary conditions (2). Here, for any $\nu$, at least one of the coefficients $\alpha_\nu$ and $\gamma_\nu$ is not zero. We assume that the system of boundary conditions (2) is normalized. This means that the sum of the orders of all boundary conditions, $\varkappa = \sum_\nu k_\nu$, is minimal. See [3, §4]; see also [10], where a more general class of boundary value problems is considered.

For large $|\zeta|$, the functions $\varphi_j(t,\zeta)$ are linearly independent. Therefore there exists a matrix $C(\zeta) = (c_{jk})_{0 \le j,k \le 2n-1}$, depending on $\zeta$, such that the corresponding expansion holds. By the initial conditions and the relations (5), we obtain an expression for $F_1(\zeta)$. Now we consider the second problem in (3) and define the function $F_2(\zeta)$ similarly to $F_1(\zeta)$, with $\psi_2$ instead of $\psi_1$. Then the following relation holds. Moreover, the quotient $|F_2(\zeta)/F_1(\zeta)|$ is uniformly bounded on the circles $|\zeta| = r_k$ for a proper sequence $r_k \to \infty$. Further, by the continuity of solutions to a differential equation with respect to parameters, we have $F_1(\zeta)/F_2(\zeta) \to 1$ as $\zeta \to 0$.

Separated boundary conditions

Now we consider an important particular case.

Theorem 2. Let the assumptions of Corollary 1 be satisfied.
Suppose also that the boundary conditions (2) are separated in the main terms, i.e. have the form … Denote by $\varkappa_0$ and $\varkappa_1$ the sums of the orders of the boundary conditions at zero and at one, respectively: …

Proof. Under the assumptions of the Theorem, the matrix determining $\theta^{-1}(\psi)$ is block diagonal, and we obtain (7).

Many classical Gaussian processes satisfy the assumptions of Theorem 2. We give several examples. For a random process $X(t)$, $0 \le t \le 1$, denote by $X_\beta$ its boundary-condition modification (any index $\beta_\nu$ equals 0 or 1). Namely, in Theorem 2 one should set $n = m + 1$. We substitute these quantities into (7) and obtain the asymptotics of the probability $\mathrm{P}(\|W^{(m)}\|_\psi \le \varepsilon)$ (Proposition 1). In a similar way, using [1, Propositions 1.6 and 1.8], we obtain the following relations.

Proposition 2. Let $B(t)$ be the Brownian bridge. Then, under the assumptions of Proposition 1, the following relation holds: …

Proposition 3. Let … be the conditional integrated Wiener process (see [13]). Then, under the assumptions of Proposition 1, the following relation holds: …

Let us introduce the notation $\varepsilon_n = \varepsilon\, n \sin\frac{\pi}{2n}$. The following relations can be obtained using [...].

Proposition 4. Let $U(t)$ be the Ornstein–Uhlenbeck process, i.e. the centered Gaussian process with the covariance function $\mathrm{E}\,U(t)U(s) = e^{-|t-s|}$. Then, under the assumptions of Proposition 1, the following relation holds: …

Proposition 5. Let $S(t) = W(t+1) - W(t)$ be the Slepian process (see [21]). Then, under the assumptions of Proposition 1, the following relation holds: …

Proposition 6. Let $M^{(n)}(t)$ be the Matérn process (see [18]), i.e. the centered Gaussian process with the covariance function … Then, under the assumptions of Proposition 1, the following relation holds: …

Remark. It is well known that … It is easy to see that the formula from Proposition 4 with $m = 0$ coincides with the formula from Proposition 6 with $n = 1$.

Non-separated boundary conditions

If some boundary conditions are not separated in the main terms, they can be split into pairs of the form (8) (see [3, §18]). We consider the case with a unique such pair.

Theorem 3. Let the assumptions of Corollary 1 be satisfied. Suppose also that one pair of boundary conditions has the form (8), while the other ones are separated in the main terms. Denote by $\varkappa_0$ and $\varkappa_1$ the sums of the orders of the separated boundary conditions at zero and at one, respectively: …

Proof. We have … Since … Now Corollary 1 implies (9).
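For a concrete reference point for such relations, one can recall the classical unweighted example ($\psi \equiv 1$) for the Wiener process $W$; this is a standard fact quoted here for orientation, not a display of this paper:

```latex
\[
  \lambda_j = \frac{1}{\pi^2\,(j-\frac12)^2}, \qquad
  \mathrm{P}\Big\{\int_0^1 W^2(t)\,dt \le \varepsilon^2\Big\}
  \sim \frac{4}{\sqrt{\pi}}\,\varepsilon\,
  \exp\!\Big(-\frac{1}{8\varepsilon^2}\Big), \quad \varepsilon \to 0,
\]
```

consistent with the Karhunen–Loève reduction stated in the introduction.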
Chemical–biological approaches for the direct regulation of cell–cell aggregation

Cell–cell aggregation is one of the most well-known modes of intercellular communication. Aggregation also plays a vital role in the formation of multicellularity, thus shaping the growth and development of organisms. In the past decades, cell–cell aggregation-related bioprocesses and molecular mechanisms have attracted enormous interest from scientists in biology and bioengineering. A series of strategies has been developed to artificially regulate cell–cell aggregation through chemical–biological approaches. To date, not only chemical reagents such as coordination compounds and polymers but also biomacromolecules such as proteins and nucleic acids have been employed as the "cell glue" to achieve control of cell aggregation. It is therefore worthwhile to review the recent advances in chemical–biological approaches to the manipulation of cell–cell aggregation. In this review, we discuss the mechanisms and features of recently developed strategies to control cell–cell aggregation. We introduce molecules and designs relying on chemical reactions and on biological conjugations, respectively, and discuss their advantages and suitable applications. A perspective on the challenges for future applications in cell manipulation and cell-based therapy is also proposed. We expect this review to inspire innovative work on manipulating cell–cell aggregation and, further, on modulating cell–cell interactions in chemical and biological research.

INTRODUCTION

Cell–cell aggregation, which biologically appears in cell recognition and cell communication, has been widely recognized as an essential process in cell–cell interactions, the formation of multicellular organisms, and the integration of biofunctions [1,2]. Naturally, aggregation between cells is achieved through the interactions of particular molecules anchored on cell surfaces. Through cell–cell aggregation, basic cell behaviors, such as cell activation, apoptosis, migration, proliferation, and differentiation, are elaborately controlled to ensure the regulatory status of physiological processes, including the regulation of tissue homeostasis, the immune response, and so on [3]. For example, recent studies have reported that circulating tumor cells exhibit accelerated metabolism once they have formed aggregated clusters [4–7]. Accordingly, these intercellular reactions should be highly specific and tightly regulated. Accumulated evidence has shown that many diseases, such as metabolic disorders, autoimmune disruption, and even cancers, are related to dysfunctional cell–cell aggregation [8,9]. Therefore, precise manipulation of cell–cell aggregation would help push forward the study of various cellular processes at the level of molecular mechanism. Artificial control of cell–cell aggregation has shown great potential not only in cell biology but also in many research fields including tissue engineering, organ reproduction, and cell-based diagnosis and treatment [10–14]. Nowadays, numerous strategies have been developed to achieve programmed cell–cell aggregation, thus helping researchers to better understand the biological processes in living organisms and to deal with different diseases [15,16]. In natural biosystems, intercellular interactions are usually modulated by cell-surface molecules, including nonspecific aggregation molecules and specific receptor/ligand pairs.
Although different cells may randomly come close to each other during migration, such proximity may not initiate any communication between them, which means that regulated cell–cell aggregation may only occur between a few specific cells. To expand the scope of cell–cell aggregation, scientists have attempted to artificially modify cell surfaces with molecules that can interact with each other for connection or crosslinking [17]. Originally, cellular engineering through genetic manipulation was exploited to modulate the expression of specific surface receptors to improve the junctions between cells. However, the expression of endogenous proteins is not easy to fine-tune, and overexpression may cause undesired cell phenotypes. In addition, when this method is translated from the lab to the clinic, obstacles including low survival and poor tissue specificity are also difficult to deal with [18,19]. Thanks to the development of biochemistry and molecular biology, novel "cell glue" molecules, including chemical molecules and biomolecules, can now be artificially installed on the surface of target cells, thus achieving regulated cell aggregation. In general, the major challenges of a suitable "glue" design are that (1) it can be freely installed on the cell membrane without affecting cell function and activation; (2) it is stably anchored on the cell membrane without being continuously internalized, degraded and replaced; and (3) it reacts efficiently to build crosslinks under cellular conditions. To date, biomolecule-based "glues" such as complementary DNA strands, antibodies, aptamers, and acceptor–ligand systems (e.g., biotin-streptavidin) have demonstrated higher biocompatibility, specificity, and programmability. On the other hand, nonbiomolecules such as polymers and host–guest molecular pairs have shown better affinity and lower cost. The development of new approaches will provide exciting tools for people interested in the scientific fields of biology, pharmacology, and materials science. Therefore, it is meaningful to summarize the design strategies that have recently been introduced for manipulating cell aggregation. In this review, the reported strategies for cell aggregation, including both chemical methods and biological methods, will be introduced; the corresponding reaction mechanisms will be detailed; and their advantages and limitations will be discussed. We hope that this review can give guidance to scientists in biology and chemical biology when they are trying to pick the most suitable "glue" for cell-related research, as well as inspire scientists to devise more promising "glue" strategies to promote the application of these encouraging tools.

NONBIOLOGICAL MOLECULES-INDUCED CELL AGGREGATION

To achieve precise control of cell assembly, the first and most important step is to anchor the "glue molecules" stably on the cell surface. The cell membrane is a heterogeneous, complex, and dynamic biological structure consisting of many lipids, proteins, and carbohydrates, which contain specific functional groups such as primary amines, thiols, and diols. These functional groups can take part in versatile chemical conjugation reactions. On this account, many chemical reagents have been designed for specific modification of the target cell surface.
Once a surface modification is achieved, artificially manipulated cell–cell aggregation can be realized through different chemical reaction routes including covalent connection, host–guest interaction, polymerization, and so on. For chemical reaction-induced cell aggregation, a fundamental requirement is that the reaction should not interfere with cell activity and physiological function. In 2000, Bertozzi and coworkers introduced the "Staudinger ligation" reaction, proposing the new concept of the bioorthogonal reaction [20]. To date, the well-developed bioorthogonal chemistry methodology has provided several chemical reaction approaches such as Cu-catalyzed azide-alkyne cycloaddition (CuAAC), Staudinger ligation, cross-metathesis, and others [21]. These bioorthogonal reaction methods not only help to install the desired compounds on the surface of target cells but also make linkage between cells possible [22,23]. As an improvement, scientists can also break the established linkage through photoisomerization, the introduction of competitive molecules, or other designed methods, to realize reversible cell–cell aggregation. Reversible programming of cell–cell aggregation has the following advantages: (1) it can realize the reuse of surface-altered tool cells; (2) it allows the temporal influence on dynamic cell–cell aggregation and intercellular signal transduction to be better explored; and (3) it also opens the possibility of extended applications, such as cell sorting, cell patterning and tissue capture/release biotechnologies. In this section, we introduce the purely chemical molecules and strategies that have been employed, organized according to the reaction mechanisms used for inducing cell–cell aggregation.

Covalent chemical reaction

Generally, cell–cell connections can be realized through the direct reaction between the molecules modified on the surfaces, so covalent linkages are usually the first choice due to their stable chemical bonds and diverse designs. Click chemistry is a typical reaction that can provide stable covalent bonds with high reaction efficiency under mild reaction conditions. Since it was first proposed by Sharpless in 2001 [24], click chemistry has been widely used in drug development, biomedical material construction, and other fields. Taking advantage of its high yield, inoffensive by-products, simple reaction conditions, and fast reaction rate, many click-chemistry-based strategies for manipulating cell aggregation have been developed.

FIGURE 1 Covalent chemical reaction for cell aggregation. (A) General schematic of bio-orthogonal ketone and oxyamine molecules for subsequent chemoselective oxime ligation to realize cell–cell aggregation. Reproduced with permission: Copyright 2011, American Chemical Society. [25] (B) Schematic describing the molecular-level control of tissue assembly and disassembly via a photo-switchable cell surface engineering approach. Reproduced with permission: Copyright 2014, Nature. [26] (C) Scheme showing dynamic liposome–liposome fusion and liposome–cell fusion for tailoring cell surfaces. Reproduced with permission: Copyright 2014, Wiley. [27] (D) Illustration of the cellular gluing method based on metabolic glycoengineering and double click chemistry. Reproduced with permission: Copyright 2015, Wiley. [28] (E) Illustration of the chemically detachable cellular glue system based on click chemistry linkers with degradable disulfide bonds. Reproduced with permission: Copyright 2016, American Chemical Society. [29]

The Yousaf group [25] realized cell assembly via oxime-based click chemistry (Figure 1A). They constructed lipid vesicles modified with ketone and oxyamine groups, respectively. The functional groups were then delivered to the membranes of different cell populations by a liposome fusion process and subsequently reacted with each other via oxime ligation, enabling the modified cells to connect with each other. Using this method, the researchers constructed small three-dimensional (3D) spherical cell assemblies and even large, dense 3D multilayered tissue-like structures from the bottom up. In this strategy, the bioprocess-mimicking lipid fusion helped the functional groups insert into target cells without disrupting normal cell physiological processes. Moreover, the generated covalent oxime bond was endowed with a large bond energy, thus ensuring the high stability of the cell aggregates. As an improvement, the same group embedded a
photo-cleavable center between the lipid hydrophobic moiety and the oxyamine group (Figure 1B) [26]. Under mild UV radiation (365 nm, 10 mW/cm², 5 min), the UV-controlled component could be cleaved, leading to the complete dispersal of the assembled cell aggregates. Remote, spatial and temporal control of cell interactions could thus be realized. Besides this photo-switchable method, the Yousaf group [27] proposed an electrochemical reduction strategy to modulate the assembly and disassembly of cells. As shown in Figure 1C, instead of the ketone group, hydroquinone was delivered onto the cell surface via liposome fusion. After activation by chemical (CuSO4·5H2O, 5 min) or electrochemical oxidation (−100 to 650 mV, 100 mV/s), the hydroquinone converted to quinone and further reacted with oxyamine, thus connecting target cells via the oxime bond. The oxime would be cleaved under a reductive potential (−100 mV, 10 s), causing a quick reversion to the original hydroquinone and oxyamine on the cell surfaces. In contrast to photo-controlled disassembly, this electrochemical method realized cyclic conversion between cell assembly and disassembly at a more rapid reaction rate. In addition to the ketone–oxyamine condensation reaction, azide–dibenzocyclooctyne (DBCO) and trans-cyclooctene (TCO)–tetrazine (Tz) are also frequently used covalent chemical linkers for cell aggregation because of their aqueous-phase reactions and copper-independent, nontoxic catalysis. In recent years, the Yun group [28] established cell–cell contacts via a double click chemistry method (Figure 1D). First, azide (N3) groups were introduced onto the cell membrane through metabolic glycoengineering. Next, the N3 groups on different cells were reacted with DBCO pre-attached to Tz or TCO, respectively. Finally, the obtained Tz- and TCO-modified cells were mixed together and connected with each other through the Tz–TCO click chemistry reaction within 10 min. In addition to the advantage of efficient linkage, the formed Jurkat T–NIH3T3 cell pairs showed good mobility, high stability, and strong linkage at flow rates up to 60 ml/min, corresponding to shear stresses over 20 dyn/cm², which is higher than the typical vessel-wall shear stress in veins and arteries. Moreover, they demonstrated that after being intravenously injected into live mice, the Jurkat T–Jurkat T cell pairs could be observed in circulation and in the lung
tissues, suggesting the potential of a novel cell delivery strategy in which a cargo cell is attached to a mobile carrier cell. To extend this method, the Yun group [29] synthesized DCBO-SS-Tz/TCO, which has a disulfide bond (S–S) inserted between the DBCO and the Tz/TCO (Figure 1E). Based on the degradability of the S–S bond by glutathione (GSH), controlled disassembly of the cell pairs could be realized with the addition of 5 μM GSH in only 10 min. They also showed that the introduction of the S–S bond did not decrease the strength of the Tz–TCO connection. The great efficiency of both the assembly and the disassembly of the cell aggregates supports the application of this new method in tissue engineering and cell biology.

FIGURE 2 Typical host–guest interaction for cell aggregation. (A) Structures of hosts and guests for vesicle aggregation and the corresponding light-responsive mechanism. Reproduced with permission: Copyright 2010, Wiley. [31] (B) Reversible manipulation of cell assembly and disassembly by a light-responsive host/guest pair. Reproduced with permission: Copyright 2016, Nature. [32] (C) Schematic representation of the supramolecular functionalization of cell surfaces via targeting of the membrane receptor CXCR4 (green). Reproduced with permission: Copyright 2017, Nature. [34]

Host–guest interaction

Host–guest interaction is a well-known reaction motif in many research fields due to its simple reaction conditions, good specificity, and high efficiency. As one of the star host molecules, cyclodextrin (CD), which has been widely used in many biological applications such as drug delivery and biomedical materials synthesis [30], has attracted great interest for cell aggregation manipulation. The Ravoo group [31] prepared a unilamellar CD (α-CD or β-CD) bilayer vesicle as a versatile model system for the recognition, adhesion, and fusion of biological cell membranes (Figure 2A). Two vesicles made from α-CD and β-CD, respectively, are linked upon the addition of azobenzene (azo), because azo can react with both α-CD and β-CD through host–guest interactions. Similarly, tert-butylbenzene is also a specific guest for β-CD, which can connect two vesicles containing β-CD, thus regulating cell–cell contact via host–guest interaction. The Qu group [32] introduced a strategy to achieve a fast and convenient reversible cycle of cell assembly and disassembly (Figure 2B). They first modified the cell membrane with azide groups through metabolic glycoengineering; then the host molecule alkynyl-PEG-β-CD (PEG = polyethylene glycol) was installed on the cell surface utilizing CuAAC. A series of azobenzene derivatives were employed as the guest molecules. The trans-azobenzene could transform into cis-azobenzene under UV-light irradiation and return to the trans-configuration under visible-light irradiation, while cis-azobenzene cannot react with β-CD through the host–guest reaction because of steric hindrance. Such a design provided spatiotemporal control for cell engineering. For example, the introduction of a fluorescently labeled azo achieved controllable cell imaging, and the introduction of a homobifunctional guest molecule (azo-PEG-azo) made reversible cell assembly successful. Through this method, they built up cell clusters between peripheral blood mononuclear cells (PBMCs) and targeted cancer cells, and demonstrated that the programmable cell aggregation might induce the apoptosis of cancer cells, thus providing a new cell-based therapy for different diseases.
In recent years, the same group improved the method described above. They utilized lipid-modified β-CD (lipid-PEG-β-CD) to anchor to the cell membrane via hydrophobic insertion, thus avoiding the more complicated β-CD introduction procedure used previously, [33] and demonstrating an easier way to achieve reversible manipulation of cell aggregation. Different from click chemistry, host-guest interaction induces cell-cell connection through noncovalent bonds, which favors reversible assembly owing to the lower bond energy. In addition to azobenzene, adamantane (Ad) is another well-established guest for cyclodextrin. Rood et al. introduced Ad to the cell surface utilizing the specific binding between chemokine receptor 4 (CXCR4) and the Ac-TZ14011 peptide (Figure 2C). [34] A polymer containing multiple β-CD molecules was used as a linker to hold multiple Ad molecules together. It is reported that Ad-β-CD has a higher association constant than azo-β-CD, [32] thus achieving more stable cell-cell adhesion for further cell/tissue engineering applications. Moreover, they also demonstrated that the excess β-CD between cells can be further functionalized with fluorophores or therapeutic agents for signal labeling or drug delivery.

Polymeric technology

Several works have linked polymeric technology with applications in living cell encapsulation, [35] cell-surface receptor aggregation, [36] modulation of cell-surface properties, [37] and so on. However, applications of polymers in the direct manipulation of cell-cell aggregation have rarely been reported. The investigation of polymerization reactions on the living cell surface is still at an early stage because of the difficulty of maintaining cell viability under polymerization conditions, owing to the use of cytotoxic transition-metal catalysts and organic solvents and to the production of radical species during the reaction. The good news is that a few attempts have been made to regulate cell aggregation with presynthesized polymers. The Yusa group [38] introduced a representative method that utilizes the phase-separation behavior of polymers at the lower critical solution temperature (LCST) to realize cell aggregation. Poly(N-isopropylacrylamide), an ideal thermosensitive material extensively applied in biomedical systems, was employed because of its near-body-temperature LCST between 32°C and 35°C. The methacryloyl group was first delivered to sialic acid residues on the cell surface by metabolic glycoengineering and then reacted with terminal thiol-modified poly(N-isopropylacrylamide) (PNIPAM-SH) by a thiol-ene reaction under 365 nm UV irradiation, thus anchoring the PNIPAM on the cell membrane. At a temperature below the LCST (25°C), the polymer is hydrophilic because the hydrophilic functional groups of the polymer chains form hydrogen bonds with water molecules; the PNIPAM therefore remains in a swollen state and keeps the cells apart. If the temperature rises to 37°C, above the LCST, enhanced hydrophobic interactions make PNIPAM dehydrate and shrink, thus inducing cell-cell aggregation. Di(ethylene glycol) methyl ether methacrylate (DEGMA) is another promising thermosensitive polymer candidate. Pasparakis and coworkers [39] prepared a copolymer of DEGMA and N-hydroxysuccinimide methacrylate (NHS-MA), which could be attached to the cell surface through reaction with the amino groups of membrane proteins (Figure 3A).
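The LCST switch amounts to a sharp, temperature-gated transition between a hydrated (dispersing) and a dehydrated (aggregating) polymer state. A minimal sketch, modeling the hydrated fraction as a sigmoid in temperature: the midpoint matches the 32-35°C range quoted for PNIPAM, while the transition width is our assumption.

```python
import math

LCST_C = 33.5   # midpoint of the PNIPAM transition (text quotes 32-35 C)
WIDTH_C = 0.8   # assumed transition sharpness, degrees C

def hydrated_fraction(temp_c: float) -> float:
    """Toy sigmoid: 1 = fully hydrated chains, 0 = fully collapsed chains."""
    return 1.0 / (1.0 + math.exp((temp_c - LCST_C) / WIDTH_C))

for t in (25.0, 33.5, 37.0):
    f = hydrated_fraction(t)
    state = "dispersed (swollen chains)" if f > 0.5 else "aggregated (collapsed chains)"
    print(f"{t:5.1f} C: hydrated fraction {f:.2f} -> {state}")
```

The narrow width is what makes the simple culture-temperature step from 25°C to 37°C an effective on/off control for aggregation.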
As described previously, if the temperature is above the LCST, phase transition-triggered cell aggregation is observed. In addition, Pasparakis and coworkers prepared another copolymer of water-soluble N-vinylpyrrolidone (NVP) and 3-(acrylamido)phenylboronic acid (APBA) as the targeting moiety, which reacts with the cis-diol groups on sialic acid to form boronate ester bonds. As a "glue molecule," this polymer enabled cells to adhere in just several minutes to form spherical cell structures. Moreover, the reversibility of the boronate ester bond allowed the cell spheroids to be disassembled by adding 0.01 mM glucose. One challenge of the polymer methods is the requirement for high polymer concentrations, which may be toxic to cells and organisms. Scientists therefore developed a new copolymer that accelerates cell aggregation by two distinct mechanisms (diol-boronate ester-induced intercellular crosslinking and polymer-polymer hydrophobic interactions above the LCST). [40] The LCST of this copolymer was between 32°C and 34°C, which is suitable for cell culture. As a result, only 25 μg/ml of the new copolymer was needed for cell aggregation control, whereas the minimum concentrations of the previous two copolymers were 200 μg/ml. Because of the negatively charged nature of the cell membrane, electrostatic adsorption between a positively charged polymer and the cell membrane is a supplementary strategy to trigger cell aggregation. A polyethyleneimine (PEI) backbone conjugated with hydrazide groups (PEI-hy) can serve as an efficient "glue molecule" among cells (Figure 3B). [41] Aldehyde residues, introduced onto the cell surface by periodate oxidation, react with the hydrazide groups of PEI-hy and consequently mediate cell-cell connection, while a neutral hydrazide cannot trigger such aggregation. This result indicates that the positively charged polymer concentrates the linkers onto the negatively charged cell membrane, improving the efficiency of covalent linkage. However, it has been realized that electrostatic force alone is not strong enough to trigger cell aggregation; electrostatic adsorption is usually used as an auxiliary force, combined with other forces, to accelerate cell aggregation and improve the stability of the aggregates. For example, Mo et al. introduced an oleyl-PEG-conjugated 16-arm polypropylenimine hexadecaamine (DAB) dendrimer, consisting of a positively charged dendrimeric linker and hydrophobic cell membrane-binding moieties, to rapidly stabilize cell-cell contacts. [42] They demonstrated that this is a quick and simple way to induce cell aggregation with 1 min of centrifugation at 40 rcf. PEG is another famous polymer that has been widely applied in biological systems. Recently, Hawker's group introduced a controlled radical polymerization (CRP) strategy to link polymers to the cell surface. With PEG as the example molecule, they showed that the functionalized PEG could interact with tannic acid (TA) via hydrogen bonding to perform TA-triggered aggregation of PEG-modified yeast cells. [35] More generally, PEG usually serves as an important linker between anchoring molecules and functional molecules such as lipids, aptamers, antibodies, and bioorthogonal groups. [33,43,44]
FIGURE 3 Polymer-induced cell aggregation. (A) Illustration of the macromolecular cell surface modification concept with copolymers P1 and P2. P1 induces cell aggregation through intercellular diol-boronate ester formation that can be reversed by the addition of diol-rich compounds such as glucose; P2 promotes cell aggregation by covalent anchoring on the cell membrane and subsequent formation of cell aggregates via thermoresponsive coil-to-globule phase transition of the polymer above the LCST. Reproduced with permission: Copyright 2015, Royal Society of Chemistry. [39] (B) Structure of (a) positively charged PEI-hydrazide and (b) neutral hydrazide. Electrostatic adsorption assists cell aggregation. Reproduced with permission: Copyright 2007, Elsevier. [41]

Because the 20-nm-long glycocalyx on the cell surface causes nonspecific and unavoidable steric hindrance between cell-surface molecules and exogenous modifying molecules, PEG is an ideal candidate to extend functional groups away from the cell surface without affecting their conformations, thus allowing the cell-cell interaction to proceed smoothly. [45]

Coordination interaction

Ionic coordination interaction is common in the construction of nanomaterials such as hydrogels and collagen nanoassemblies; it has now also been reported as a technique for inducing cell aggregation. The Caggiano group [46] realized rapid cell aggregation via coordination between Fe³⁺ ions and maltol (Figure 4). In their strategy, maltol was introduced via a maltol-derived hydrazide, which reacts with non-native aldehydes installed on the cell surface by mild periodate oxidation of sialic acid residues. Subsequent addition of Fe³⁺ induces nonspecific cell aggregation under gentle agitation within 10 min, since one Fe³⁺ ion can coordinate three maltol molecules to form a crosslink. However, a barrier to cell aggregation with this technology is that iron-chelating proteins such as transferrin in serum can also coordinate Fe³⁺, [47] limiting its application in vivo.

FIGURE 4 Schematic representation of chelate-mediated cell aggregation. Reproduced with permission: Copyright 2013, Royal Society of Chemistry. [46]

3 BIOMOLECULES-INDUCED CELL ASSEMBLY

In the purely chemical methods, many chemical reagents are used for both modification and interconnection, which may negatively affect cell viability and proliferation. Moreover, chemical reactions usually lack the specificity to distinguish different cell lines in multiple-cell assembly systems. In this regard, many biomolecules have been exploited as a new generation of "glue molecules." Compared with chemical molecules, biomolecules show superior performance in biological applications due to their biocompatibility and specificity. To date, numerous biomolecules have been introduced for cell engineering applications, including biotin-streptavidin, antigen-antibody pairs, and DNA molecules.

Biotin-streptavidin system

The biotin-streptavidin system is the most widely applied biomolecule pair for generating stable linkages. Streptavidin and biotin form a highly specific combination with an extremely low dissociation constant (Kd) of about 1.3 × 10⁻¹⁵ M. [48] Streptavidin is a protein with a total molecular weight (Mw) of 60 kDa consisting of four identical subunits. Each subunit can specifically interact with one biotin, a vitamin molecule with a Mw of 244. That means one streptavidin can contact four biotins simultaneously. Considering that biotin can be easily attached to various substances such as nucleic acids, proteins, and organic molecules, a stable network constructed from multiple linkages between streptavidin and biotinylated substrates can be fabricated conveniently.
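To see how tight a femtomolar Kd really is, the sketch below solves the standard 1:1 binding equilibrium for the bound fraction. Treating each streptavidin subunit as an independent site is a simplification, and both the nanomolar concentrations and the micromolar "weak binder" used for comparison are our illustrative choices, not values from the source.

```python
import math

def bound_fraction(a_tot: float, b_tot: float, kd: float) -> float:
    """Fraction of A in complex for a 1:1 equilibrium A + B <-> AB (molar units)."""
    s = a_tot + b_tot + kd
    complex_conc = (s - math.sqrt(s * s - 4.0 * a_tot * b_tot)) / 2.0
    return complex_conc / a_tot

KD_BIOTIN_SA = 1.3e-15  # M, dissociation constant quoted in the text
for kd, name in [(KD_BIOTIN_SA, "biotin-streptavidin"),
                 (1e-6, "weak micromolar binder (assumed, for contrast)")]:
    f = bound_fraction(1e-9, 1e-9, kd)  # 1 nM of each partner, illustrative
    print(f"{name}: bound fraction at 1 nM each = {f:.4f}")
```

At 1 nM the biotin-streptavidin pair sits essentially fully bound (fraction ~0.999), whereas a micromolar binder is ~0.001 bound; this is why the linkage behaves as effectively irreversible in cell-gluing applications.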
These advantages have made the biotin-streptavidin system widely applied in many fields, including chemistry, biology, and even engineering. Unsurprisingly, such a system has also attracted intense attention from those studying cell-cell interactions, owing to its specific affinity and good biocompatibility. Similar to the functional molecule maltol mentioned above, biotin can be introduced via biotin hydrazide, which reacts with aldehydes installed on the cell surface by periodate oxidation of sialic acid residues (Figure 5A). [49] Cell aggregation can subsequently be induced by the addition of streptavidin. To further simplify the process, scientists developed a one-step approach for biotinylation of the cell surface through endogenous amine groups, without introducing aldehydes. In this strategy, N-hydroxysuccinimide biotin (sulfo-NHS-LC-biotin) was employed to directly link biotin to amine groups on the cell membrane within 10 min. Using this method, the Sakai group [48] successfully adhered biotinylated cells into multicellular spheroids in a 6-well plate by shaking at 60 rpm for only 3 min (Figure 5B). The strong binding affinity of the biotin-streptavidin system not only makes cell aggregation fast but also makes homogeneous aggregation of different cell types possible. Going beyond the compact, multilayered tissue-like structures formed by cell-cell connection, a 3D tubular structure induced by biotin-streptavidin linkage was also constructed using the stress-induced rolling membrane (SIRM) technique (Figure 5C). [50] The resulting 3D cell aggregate can mimic tubular structures in vivo such as blood vessels. This biotin-streptavidin strategy for cell aggregation has also been applied in cellular therapy. For example, the Wang group demonstrated that both natural killer (NK) cells and Jurkat cells (T-lymphoma cells) could be biotinylated and then connected with each other using the biotin-streptavidin linkage (Figure 5D). [51] The NK cells induced apoptosis of the Jurkat cells more effectively by proactively shortening the intercellular distance, thus achieving targeted cell killing with outstanding specificity and efficiency. Moreover, scientists have shown that such cell-cell communication can be regulated by light in a noninvasive, remote-controlled manner. To achieve this, a polythiophene derivative (PTP), a small molecule that generates reactive oxygen species (ROS) under light irradiation, was selectively internalized into NK cells. Light-induced PTP activation increased the mortality of the NK cells in the biotin-streptavidin-induced aggregation system, thus reducing the damage the NK cells inflicted on the Jurkat cells in a light-dose-dependent manner.

Specific binding of protein-protein or protein-peptide

The biotin-streptavidin system has been well characterized and widely used in biological studies; however, it still has some deficiencies. For example, streptavidin is derived from bacteria and is a potent antigen that may induce an immune response in mammals. Moreover, the biotin-streptavidin association is too strong to be dissociated, which limits its use for flexible control of cell assembly. Some proteins have been shown to mediate cell aggregation naturally, but this in vivo approach is not yet universally, specifically, or artificially controllable because the mechanisms remain unclear.
Nevertheless, these natural pathways remind us that proteins are an important class of biomolecules that should not be ignored for cell-cell connection. To date, several proteins have been used to specifically target biomarkers on the cell membrane and thus purposefully regulate the aggregation of particular cell populations. Antigen-antibody interaction is commonly used in various biological methods such as western blotting and enzyme-linked immunosorbent assay (ELISA). Scientists have introduced various antibodies (including single-chain antibody fragments, [52] nanobodies, [53] etc.) and attached them to prepared nanoparticles to recognize and bind specific targets on the cell surface. The high specificity and affinity of the antigen-antibody union provide the possibility of establishing cell-cell interaction systems through such a method. A bispecific antibody is constructed by fusing two different antibodies, so that one bispecific antibody molecule can recognize and bind two different targets, making it a potential "glue" to link different cells together. For example, the CD19-CD3 bispecific antibody Blincyto has been used to engage cytotoxic T cells for redirected lysis of tumor cells; this strategy has been confirmed to be effective in the clinic and has been approved by the FDA for immunotherapy. [54] So far, many methods have been developed to connect two kinds of antibodies for inducing cell aggregation, such as chemical crosslinking, [44] recombinant antibody engineering, [12,55,56] DNA-mediated antibody assembly, [57,58] and further-developed multispecific antibodies. [57,59,60]

FIGURE 5 Biotin-streptavidin system used for cell modification and aggregation. (A) Reaction scheme for the biotinylation of cells via periodate oxidation. Incubation of cells with sodium periodate causes the oxidation of the vicinal diols of cell-surface sialic acid residues. The resulting aldehyde groups can be used to selectively ligate biotin hydrazide to the cell surface via the formation of a hydrazone bond. Reproduced with permission: Copyright 2003, Wiley. [49] (B) Biotin-streptavidin-induced rapid formation of multicellular heterospheroids. Reproduced with permission: Copyright 2011, Elsevier. [48] (C) Schematic diagram of cell surface modification and stepwise formation of multicellular structures. Reproduced with permission: Copyright 2013, Wiley. [50] (D) Multicellular assembly of immune cells (NK-92MI) and cancer cells (Jurkat) by the chemical bottom-up approach. Reproduced with permission: Copyright 2014, Wiley. [51]

Antibodies can also be directly anchored on the cell surface through lipid insertion, which is more convenient to operate than conventional bioengineering methods. The Wagner group [61] prepared chemically self-assembled nanorings (CSANs), containing both a single-chain antibody (scFv) and a lipid-linking group, to recognize cell-surface receptors and thereby direct cell assembly (Figure 6A). In their design, two dihydrofolate reductases (DHFR) connected by a linker peptide rapidly self-assemble into a CSAN. This CSAN binds tightly to lipid-linked bis-methotrexate (bisMTX), a dimer of a DHFR inhibitor. By recombinantly fusing the scFv with DHFR, an scFv-CSAN-bisMTX-lipid construct is finally obtained, and such a structure can insert into the cell membrane.
Utilizing this method, they modified tool cells with an scFv against epithelial cell adhesion molecule (anti-EpCAM), and the modified cells could connect with EpCAM-positive cells, such as MCF-7 cells, forming stable cell assemblies. As an improvement, the same group [62] reported a method in which two kinds of fused scFv-DHFR subunits with different targeting molecules were mixed to form a bispecific CSAN. Furthermore, they demonstrated that the binding between bisMTX and DHFR can be broken by the addition of excess trimethoprim, a competitive inhibitor of DHFR. This is advantageous because trimethoprim is an FDA-approved, low-toxicity antibiotic that distributes widely throughout the organism, making in vivo regulation of cell assembly and disassembly feasible. Recent studies have shown that the clustered distribution of T cell receptor ligands on antigen-presenting cells (APCs) affects T cell activation in immune therapy. [63,64] A shorter distance between antibodies and the cell membrane may provide better results, so researchers have sought approaches to fine-tune the cell-cell distance for immune therapy. The Fan group [14] prepared a cholesterol-DNA-biotin linker to anchor peptide-major histocompatibility complex (pMHC) and a CD28 antibody on the red blood cell (RBC) membrane, using the RBC as the APC. Taking advantage of the length controllability of DNA, they found that a shorter distance between the clustered pMHCs and the cell membrane was beneficial for more effective T cell activation. Thanks to the development of genetic engineering technology, researchers in cell biology and molecular engineering now have powerful tools to express proteins directly on the cell surface. As an example, the Notch protein has been recognized as a target. [65-67] Notch is a transmembrane receptor protein containing three parts (an extracellular ligand-binding module, an intracellular transcriptional module, and a central regulatory module). However, the interaction between exposed wild-type Notch and its ligand initiates intramembrane proteolysis (i.e., cleavage of the receptor), [68,69] so the Notch protein and its ligand cannot be applied for cell aggregation directly. The Lim group [70] discovered that the intracellular and extracellular domains of Notch can be replaced by heterologous amino acid sequences (Figure 6B).

FIGURE 6 Specific binding of protein-protein or protein-peptide in cell assembly/disassembly. (A) Working principle of the introduction of a single-chain antibody (scFv) via the interaction between dihydrofolate reductase (DHFR) and the DHFR inhibitor methotrexate (bisMTX), as well as subsequent cell-cell assembly and disassembly. Reproduced with permission: Copyright 2014, Wiley. [123] (B) Structure and working principle of synthetic Notch receptors (synNotch) (a) and the assembly of sender and receiver cells via synNotch receptor-ligand interactions (b). Reproduced with permission: Copyright 2016, Elsevier. [70] (C) Design and characterization of a dual cell-surface receptor and reporter system. Reproduced with permission: Copyright 2015, American Chemical Society. [73] (D) Schematic illustration of cell-cell interaction triggered by blue-light-switchable protein pairs: (a) CRY2/CIBN [75] and (b) nMag/pMag, iLID/Nano. [77] Reproduced with permission: Copyright 2019, Wiley, [75] Copyright 2020, American Chemical Society. [77]
Using lentiviral vectors to express the designed Notch protein in host cells, they successfully prepared synthetic Notch (synNotch) receptors and observed regulated cell-cell communication and signal transduction. As an example, a receiver cell bearing a synNotch receptor whose extracellular domain is an scFv against CD19 (anti-CD19) can be activated by contact with a CD19-expressing sender cell, triggering intracellular green fluorescent protein (GFP) expression. Moreover, multiple synNotch proteins can be constructed on one receiver cell, so two or more reporter-ligand pairs can be employed to synergistically mediate the cell-cell connection. With sophisticated design, the respective signaling pathways can be modulated orthogonally; cascades of cell-cell signaling pathways can therefore be constructed artificially via engineering of multiple synNotch proteins. In recent work, the same group achieved programmed multidomain cell-cell aggregation using a synNotch-based platform. [17] In their method, recognition of the sender cell by the receiver cell induces a change in cadherin, which subsequently influences the cell-cell aggregation. This synNotch toolkit has attracted intense research attention in cell biology and bioengineering owing to its powerful performance. Similar to proteins, oligopeptides can also serve as specific recognition segments. The Luo group [71] conjugated RGD, a peptide demonstrated to act as an epitope of extracellular matrix proteins such as fibronectin, [72] to a polyamidoamine (PAMAM) dendrimer, presenting RGD in a clustered state. Such an RGD cluster can be utilized as a "glue molecule" to regulate cell aggregation. The Yousaf group [73] presented a method to initiate cell-cell interaction using RGD (Figure 6C). They first prepared a bilayer membrane structure by combining a calcein-tethered alkyl chain (calcein lipid) with other lipids (POPC and DOTAP). The calcein was then introduced into the target cell membrane by liposome fusion. Next, calcein reacted with dabcyl hydrazide, entering a fluorescence-quenched state caused by the linkage. Finally, a ligand exchange reaction was activated when an oxyamine-tethered new ligand was introduced into the system: the oxyamine group reacted with the calcein and replaced the dabcyl hydrazide group, thus linking the new ligand to the cell surface. Besides RGD, other oligopeptides have also been used successfully for cell aggregation. For example, the adhesive peptide IKVAV was likewise conjugated to PAMAM to act as a "glue" for cell aggregation. [74] IKVAV-PAMAM showed enhanced performance compared with RGD-PAMAM, probably because the IKVAV modification provided the dendrimer scaffold with greater hydrophilicity. Moreover, this multivalent adhesive conjugate enhanced cell proliferation and expression compared with cells treated with monovalent ligands, suggesting that peptide-based "cell glue" is a potential tool for the efficient construction of multicellular structures. As with the nonbiological methods, one remaining challenge in the application of proteins and peptides is to make the aggregation reversible and programmable. Recently, several light-switchable protein pairs have been developed for cell-cell interaction.
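The protein pairs named in the next paragraph differ mainly in how fast they revert in the dark, and a toy calculation already shows why that rate, together with the illumination schedule, controls how much bound linker survives. The reversion rates and association rate below are illustrative stand-ins for the qualitative trend described in the text (CRY2/CIBN reverts fastest, nMagHigh/pMagHigh slowest), not measured values.

```python
# Toy model of a blue-light-switchable binder: binding forms while the light
# is on and reverts in the dark. All rates are assumed, for illustration only.
PAIRS = {"CRY2/CIBN": 0.5, "iLID/Nano": 0.1, "nMagHigh/pMagHigh": 0.01}  # 1/min
K_ON = 1.0  # assumed light-driven association rate, 1/min

def mean_bound_fraction(k_rev, period_min, duty=0.5, cycles=50, dt=0.005):
    """Average bound fraction under square-wave (pulsed) illumination."""
    p, total, steps = 0.0, 0.0, int(cycles * period_min / dt)
    for i in range(steps):
        light_on = (i * dt) % period_min < duty * period_min
        p += (K_ON * (1.0 - p) if light_on else -k_rev * p) * dt
        total += p
    return total / steps

for name, k_rev in PAIRS.items():
    fast = mean_bound_fraction(k_rev, period_min=0.5)
    slow = mean_bound_fraction(k_rev, period_min=20.0)
    print(f"{name:>18}: rapid pulses {fast:.2f}, slow pulses {slow:.2f}")
```

Slowly reverting pairs stay bound regardless of the pulse frequency, while rapidly reverting pairs lose substantial binding during long dark phases; this is the kind of dynamic handle the Wegner group exploited to shape cluster size and morphology.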
The Wegner group successfully established cell-cell connections via selected protein pairs such as CRY2 (cryptochrome 2) and its interaction partner CIBN (N-terminal domain of CRY-interacting basic helix-loop-helix protein 1) (Figure 6D). [75] It takes just a few minutes for CRY2-CIBN to assemble under 480 nm irradiation and to disassemble in the dark. [76] When CRY2 and CIBN were expressed in two groups of MDA-MB-231 (MDA) cells, respectively, the CRY2-MDA and CIBN-MDA cells assembled rapidly once they encountered each other, forming observable cell clusters or multilayered 3D structures. Further, they demonstrated that this method can trigger cell assembly in a noninvasive (blue-light-mediated), repetitive, and reversible way. As an extension, the Wegner group introduced other light-responsive protein pairs, including iLID and its interaction partner Nano (iLID/Nano), nMag and its interaction partner pMag (nMag/pMag), and nMagHigh with pMagHigh (nMagHigh/pMagHigh) (Figure 6D). [77] Because these protein pairs have successively decreasing dark-reversion rates, the researchers evaluated the effect of binding dynamics on the sizes and shapes of the cell clusters constructed via these protein pairs. Using pulsed illumination at different frequencies to change the dynamics, they showed that compact, round cell aggregates are obtained under thermodynamic control, whereas branched, loose cell aggregates are obtained under kinetic control. Moreover, self-sorting of cell clusters with different sizes and shapes could be realized: once four different cell types were mixed together, two separate cell assemblies, iLID-/Nano-MDA cells and nMag-/pMag-MDA cells, were obtained.

DNA

Proteins have provided promising tools in biology and cell engineering. However, challenges remain for the accurate manipulation of cell aggregation. For example, the screening and synthesis of antibodies and peptides, particularly bispecific antibodies, is time-consuming and costly. What's more, exogenous antibodies tend to show strong immunogenicity, limiting their applications in living organisms. For these reasons, more suitable biomaterials for cell assembly are still urgently needed. DNA was long known simply as the biomolecule that stores genetic information. James Watson and Francis Crick first proposed the highly stable and specific double-helix model of double-stranded DNA more than six decades ago. [78] Nowadays, this classic structure is widely considered the basic building block of complicated DNA nanostructures. In the 1980s, Seeman's pioneering work [79] in DNA nanotechnology established DNA as a usable biomaterial. Since then, DNA has been recognized as a material much like a polymer. Thanks to the development of DNA nanotechnology, the highly specific self-recognition and programmability of DNA molecules have been used to create a wealth of DNA nanostructures with predesigned architectures and DNA nanomachines with customized functionality. Numerous styles of DNA nanostructures, such as tetrahedra, [80] cubes, [81] hairpin-like molecular beacons, [82] nanotubes, [83] nanoflowers, [84] hydrogels, [85] and DNA-nanoparticle superstructures, [86] have been synthesized successfully.
These DNA-based structures have been widely used in different fields, including nanomaterial synthesis, molecular motors, biosensing, bioimaging, and drug delivery. [87-92] To date, many studies have established cell-cell connection systems based on special DNA nanostructures. It is worth mentioning that although exogenous DNA molecules can initiate an immune response through the cGAS-STING pathway, the DNA strands applied in DNA nanotechnology are still too short (∼100 bp) to elicit an obvious immune response. [93] When DNA is used as the "glue molecule" for cell-cell aggregation, the first challenge is the immobilization of DNA molecules on the cell surface. Currently, the commonly used methods include the introduction of DNA aptamers and the hydrophobic modification of DNA strands. Notably, an aptamer is a special DNA molecule with a defined sequence that folds into a specific spatial structure able to recognize targets ranging from small molecules and macromolecules to entire cells. [94-96] Once the target is selected, an aptamer with high recognition specificity and binding affinity for the target can be screened using Systematic Evolution of Ligands by EXponential enrichment (SELEX). [97] In addition to the specificity and affinity required for diagnostic [98] and therapeutic applications, [99] aptamers show attractive properties compared with antibodies: they are usually smaller and more stable, resulting in better tissue penetration; the synthesis of DNA molecules is now commercially available at lower cost; and nucleic acids can be readily adapted with modifications to meet different needs. Taking these advantages, the Chang group [100] designed a dimerized five-point-star DNA nanostructure to link two tetravalent aptamers, TE02 and LD201t1, which target Jurkat cells and Ramos cells, respectively (Figure 7A). They utilized the designed nanostructure to achieve the interaction between the two cell types and proved that rigid, multivalent aptamers offer more robust binding than flexible, monovalent ones. Such a DNA nanostructure needs to search for and anchor two different cells simultaneously; therefore, if the number of targets on the cell surface is insufficient, the decreased binding efficiency may sharply limit its utility for triggering cell-cell interaction.

FIGURE 7 (C) Schematic illustration of cell-cell attachment through DNA hybridization between complementary ssDNA-PEG-lipids incorporated into the outer cell membranes. Reproduced with permission: Copyright 2013, Elsevier. [102] (D) Stepwise anchoring of fatty acid (FA)-modified ssDNA on cell membranes and subsequent cell assembly via DNA hybridization. Reproduced with permission: Copyright 2014, American Chemical Society. [107]

The Tan group [45] reported another way to induce cell-cell aggregation with high efficiency (Figure 7B). They first synthesized a diacyl lipid-PEG-aptamer complex. The lipid tail of the complex firmly inserts into the cell membrane through hydrophobic interaction, so that the aptamer TD05 could be directly anchored on the surface of cytomegalovirus (CMV)-specific CD8+ cytotoxic T lymphocytes (CTLs). The aptamer-modified CTLs can work as immune effector cells to target Ramos cells, because TD05 specifically recognizes the immunoglobulin heavy mu chain on the Ramos cell surface.
Finally, they demonstrated that such cell-cell aggregation could trigger Ramos cell killing, providing an aptamer-mediated redirected cell-killing method. Similarly, scientists have proved that aptamers can be introduced onto the T cell surface by metabolic glycoengineering and click chemistry. [101] This nonviral strategy, in which an aptamer takes the place of the CD19 chimeric antigen receptor (CAR) of CAR-T cells, may be further used for immunotherapy in mouse tumor models. While aptamers with high specificity perform advantageously in recognizing target cells for cell assembly, they still have to be screened through a complex and rigorous process. Directly anchoring different single-stranded DNA (ssDNA) oligonucleotides on different cell surfaces is a more straightforward strategy. As mentioned above, ssDNA can be easily modified with other functional groups and molecules, so inserting a lipid into the cell membrane via hydrophobic interaction is one of the most commonly used methods to anchor ssDNA on the cell surface. Teramura and coworkers [102] synthesized an ssDNA-PEG-lipid complex (Figure 7C). Through the hydrophobic lipid, the ssDNA anchors on the cell surface; when cells modified with complementary DNA strands meet each other, cell-cell attachment is established. This method is applicable to not only homogeneous but also heterogeneous cell-cell interactions. [103] It has been shown that the more ssDNA-PEG-lipid is inserted into the cell membranes, the larger the interface participating between two cells and the stronger the resulting cell-cell interaction. For E-cadherin-expressing cells, a cell-in-cell invasion process (internalization of the normal cell MCF-10A into the cancer cell MCF-7) was observed, especially at high ssDNA ratios. In fact, the binding strength provided by lipid hydrophobicity is not very robust, because cell membrane fluidity and continual membrane turnover may cause lipid detachment from the membrane or cellular internalization. [104] Thus, improving the stability of lipid insertion is necessary. Lipids of different lengths have been tried as anchors, and it has been shown that C16 dialkylphosphoglyceride-modified ssDNA anchors more stably on the cell membrane than C18 dialkylphosphoglyceride-modified ssDNA. [105] Another common finding is that dialkyl-modified ssDNA anchors more stably than monoalkyl-modified ssDNA. In recent years, the Gartner group [106] made great progress in improving the anchoring stability of monoalkyl fatty acid-modified ssDNA (Figure 7D). In their strategy, two complementary fatty acid-linked ssDNA strands are added step by step. The first strand anchors in the lipid bilayer but remains in relatively rapid equilibrium with the medium, as does the second, coanchor strand. Because of membrane fluidity, the two anchored ssDNA strands can meet and hybridize with each other, resulting in an overall improvement of the hydrophobicity and the binding strength. In their design, the coanchor strand is shorter than the first anchor strand, so the longer anchor strand can further hybridize with the complementary anchor strand on another cell to construct cell-cell aggregation. With this DNA-programmed assembly of cells (DPAC) method, the same group developed a bottom-up cell assembly strategy to build 3D tissues. [107] In their design, the ssDNA was patterned onto a glass slide through covalent linkage.
Cells modified with complementary ssDNA were introduced through a flow cell over the modified glass slide with programmable cell concentration and flow rate. Through DNA hybridization, the DNA-modified cells were anchored on the substrate surface. By repeating this flow-cell incubation process, a 3D microtissue structure was formed through layer-by-layer aggregation of cells via DNA linkages. Further, by merging the DPAC method with microfluidic and 3D printing technologies, they demonstrated that this strategy performs very well in the synthesis of cell aggregation structures. A challenge for ssDNA-based cell aggregation is building cell clusters with asymmetric or directional arrangements, which is limited by the diffusivity of ssDNA on fluidic cell membranes. To improve anchoring stability on the cell membrane, DNA frameworks have been exploited as anchoring modules for cell modification. Taking advantage of their rigid construction and larger size, 3D DNA nanostructures exhibit much slower endocytic kinetics once anchored on the cell surface compared with ssDNA; they can therefore stay on the cell membrane longer for further reactions. The Tan group [108] presented a strategy using a DNA tetrahedron structure as the "glue" for cell aggregation (Figure 8A). One of the four vertices of the DNA tetrahedron was used to form the connection between cells, while the remaining three vertices were used to immobilize the structure on the cell surface through cholesterol modification. The DNA tetrahedron with three cholesterols showed significantly improved stability compared with a DNA tetrahedron carrying only one cholesterol. The DNA tetrahedron with three cholesterols also performed better in membrane anchoring than ssDNA with three cholesterol modifications, demonstrating that the tetrahedral scaffold is an important factor for enhancing membrane-anchoring stability. Through the cell-cell connection mediated by the DNA tetrahedron, the signal response between Raji B cells and A549 cells under lipopolysaccharide (LPS) stimulation could be detected. Recently, they designed an improved DNA tetrahedron structure that is initially blocked by a special aptamer sequence so that the cell-cell interaction cannot be initiated; in the presence of target molecules such as ATP, the blocking sequence is removed by allosteric modulation, and the exposed DNA tetrahedron arms then induce the cell-cell interaction. [109] Our group introduced another improved DNA tetrahedron linker for programmed cell-cell aggregation. [110] Metastable DNA hairpins were linked to the vertices of the DNA tetrahedron, and cell-cell aggregation could be triggered once a trigger strand was added into the system to initiate a hybridization chain reaction (HCR), linking the DNA tetrahedra on the cell surfaces together. The results demonstrated that increasing the number of modified hairpins induces larger cell assemblies. Although anchoring DNA on the cell surface through hydrophobic insertion is easy to operate, a covalent connection between the DNA oligonucleotides and the cell membrane is often desired: on the one hand, covalent chemical bonds are more robust; on the other hand, covalent methods may provide a more specific link between DNA and cells. To date, DNA modification with various functional groups has been commercialized, so metabolic glycoengineering and click chemistry can be applied to introduce ssDNA onto the cell membrane with a simple reaction process.
The Bertozzi group [111] proved that the control of cell-cell contact depends strongly on DNA sequence complexity, density, and total cell concentration, so programmable cell assembly can produce desirable microtissues step by step. A similar mechanism was also demonstrated by the Song group, who used DNA-mediated cell-cell connection to promote stemness maintenance and expansion of stem cells for the first time. [112] Generally, disassembly of DNA-dependent cell aggregates can be accomplished by adding DNase or by raising the temperature above the melting temperature of the DNA duplex. [111] With the development of DNA technology, more special characteristics of DNA have been applied to control the aggregation of cells. Hou et al. reported a cytosine-rich DNA triplex platform that regulates the assembly of cells by controlling pH. [113] In recent years, the Lu group engineered DNAzymes on the cell surface (Figure 8B). [114] DNAzymes are special DNA sequences that perform catalytic DNA/RNA cleavage using specific metal ions as cofactors, so this platform can regulate dynamic cell behaviors using different metal ions. The powerful amplification capability of DNA also makes it a promising material for cell aggregation applications. Polymerase chain reaction (PCR) is the most famous DNA amplification method, but it requires extreme temperature conditions. [115] Owing to the development of molecular biotechnology, DNA molecules can now be amplified under cellular conditions and in situ. By introducing the HCR amplification technique, the Wang group [116] achieved branch-like amplification of DNA on the cell surface (Figure 8C). In their strategy, a DNA initiator was first introduced by metabolic glycoengineering and click chemistry. With this special design, the branched DNA structure produced by HCR contained many extended, unhybridized ssDNA segments whose sequences were complementary to the ssDNA segments on other cells. This polyvalent DNA hybridization pathway triggered a more effective and robust cell-cell connection, which could be observed even over 20 days in culture. What's more, one important point of this method is that only the initiator DNA strand needs to be installed on the cell surface; the reduced occupation of the cell membrane minimizes the effects of surface modification on the cells.

FIGURE 8 (A) DNA tetrahedron structure as the "glue" for cell aggregation. [108] (B) DNAzyme-controlled cell-cell interaction and two-factor disassembly control of cell assemblies by DNAzymes. Reproduced with permission: Copyright 2021, American Chemical Society. [114] (C) Schematic illustration of in situ formation of polyvalent DNA polymers on the cell surface. Reproduced with permission: Copyright 2018, Wiley. [116] (D) Scheme of DNA origami nanostructure-based organization of cell origami clusters. Reproduced with permission: Copyright 2020, American Chemical Society. [119]

Most recently, the same group developed their strategy further to construct a branched structure containing multiple aptamers on the cell surface; such a structure can act as a polyvalent antibody mimic (PAM) to initiate the aggregation of different cells. [117] The advance of DNA nanotechnology has provided more and more promising DNA nanostructures as tools for manipulating cell assembly. For example, structures constructed by DNA origami technology have been shown to be powerful tools for engineering cell surface functions. [118]
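The thermal disassembly route mentioned above (heating above the duplex melting temperature) is a design parameter one can estimate up front. The sketch below uses the Wallace rule, Tm ≈ 2(A+T) + 4(G+C) in °C, which is only a crude approximation for short oligos; real designs rely on nearest-neighbor models with salt corrections, and the 12-mer sequence here is hypothetical.

```python
# Rough estimate of where a DNA-linked aggregate would disassemble thermally.
def wallace_tm(seq: str) -> float:
    """Wallace-rule melting temperature (deg C) for a short oligonucleotide."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

anchor = "GCTAGCTTCAGT"  # hypothetical 12-mer linker strand
print(f"Estimated Tm of {anchor}: {wallace_tm(anchor):.0f} C")
print("Heating above this Tm (or adding DNase) should disperse the clusters.")
```

A linker melting only a few degrees above the 37°C culture temperature makes thermal disassembly gentle; longer or more GC-rich linkers trade that ease of release for stronger, more permanent connections.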
The Fan group [119] introduced a DNA origami nanostructure (DON) to organize the aggregation between homotypic and heterotypic cells (Figure 8D). The structure, termed a Janus DON, contained four tube-like subunits and was modified with numerous ssDNA strands on its surface. The target cells were modified with the complementary ssDNA, premodified with thiol groups, through crosslinking between the thiol and the amino groups on the cell membrane. By constructing different types of DONs, they demonstrated that different types of intercellular communication could be manipulated by DONs. Lately, new designs such as DNA hairpin motifs have also been introduced to manipulate cell-cell interactions. [120]

CONCLUSION AND OUTLOOK

The development of programmable cell-cell aggregation provides not only a useful method for investigating the mechanism of cell interaction in the cell biology field but also a promising tool for building artificial tissues in the bioengineering field. During the past decades, numerous chemical and biological pathways have been presented to achieve the goal of precisely controlling cell-cell aggregation, the characteristics of which are summarized in Table 1. Immobilizing the "glue molecules" on the cell membrane is the first vital step. Although anchoring via the insertion of hydrophobic tails is convenient, stable immobilization through covalent bonds generally provides a more robust system. Bioorthogonal chemistry strategies make the linkage between the "glue molecules" and cell surfaces efficient with the help of bioengineering techniques. Moreover, the bioorthogonal strategy can also help overcome the other important barrier in programming cell-cell aggregation, namely how to manage the adhesion between the "glue molecules." Besides these purely chemical methods, biomolecules have also been applied to build stable linkages between cells that physiologically mimic what happens in vivo. In this review, we have summarized recent advances in the molecular engineering of cell assembly and mainly discussed the mechanisms of the chemical and biological methods. As we know, the study of human tissues and diseases is often hindered by the difficulty of constructing in vivo models, which differ from in vitro ones. [121,122] Studies of cell assembly can effectively address this problem by building materials for in vivo tissue repair or by constructing realistic in vitro tissue models. Beyond this, the selective assembly of multifarious cell types, such as tumor cells and immune cells, can make a vast difference in immunotherapy: tumor cells can be directionally killed by immune cells via accurately controlled cell-cell interaction.

TABLE 1 (excerpt, DNA-based methods): modification is easy (DNA is readily adapted for various modifications), but the interaction between the DNA aptamers and the target proteins may influence their regular physiological functions. [45,100-103,105-114,116-120]

What's more, molecular engineering potentially improves our understanding of the signaling mechanisms of cellular behavior and the regulatory criteria for programming cells toward desired outcomes. Multiple challenges remain to be addressed to improve the controllable assembly of multiple populations of cells. (i) It is necessary to systematically study and reduce any negative impacts of cell-surface-loaded materials and the underlying processes on cellular function.
(ii) Some of the reported cell-cell aggregation processes must be maintained at relatively low temperature to slow the fluidity of the cell membrane and avoid endocytosis and loss of the anchors, so more stable surface immobilization methods are still necessary. (iii) Nongenetic cell engineering technologies are still in their infancy, and none has been translated into the clinic to date; studying these advanced technologies in vivo and in the clinic is therefore significant. Obviously, these demands probably cannot be met by chemists or biologists alone. Cooperation between experts in different fields, such as bioengineering, molecular biology, cell biology, chemical synthesis, and nanotechnology, is becoming more and more important for achieving the larger goal of turning programmed cell-cell aggregation into a practical tool. In the future, precise engineering of the cell surface will make it possible to manipulate specific and controllable cell aggregation between different cell types with satisfactory efficiency and applicability, which has immense potential in tissue engineering, organ reconstruction, cell-based diagnostics, and therapeutics.

ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China (Nos. 22074068, 591859123, and 21874075) and "the Fundamental Research Funds for the Central Universities," Nankai University (63211050).

CONFLICT OF INTEREST
There are no conflicts to declare.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author(s) upon reasonable request.
Mini-BFSS matrix model in silico

I. INTRODUCTION

This paper concerns itself with the supersymmetric quantum mechanics of three bosonic SU(N) matrices and their fermionic superpartners. The model in question, introduced in [1-3], has four supercharges and describes the low energy effective dynamics of a stack of N wrapped D-branes in a string compactification down to 3+1 dimensions. When the compactification manifold has curvature and carries magnetic fluxes, the bosonic matrices obtain masses [3]. When the compact manifold is Calabi-Yau and carries no fluxes, the matrices are massless. This theory has flat directions whenever the matrices are massless, and hence is a simplified version of the Banks-Fischler-Shenker-Susskind (BFSS) matrix model [4], which, for the sake of comparison, has nine bosonic SU(N) matrices and 16 supercharges and describes the non-Abelian geometry felt by D particles in a noncompact (9+1)-dimensional spacetime. We hence dub the model studied here mini-BFSS (or mini-BMN [5], for Berenstein-Maldacena-Nastase, in the massive case). The Witten index $W_I$ has been computed for mini-BFSS [6-10] and vanishes, meaning that the existence of supersymmetric ground states is still an open question. Even the refined index, twisted by a combination of global symmetries and calculated in [9], gives us little information about the set of ground states due to the subtleties associated with computing such indices in the presence of flat directions in the potential. This is in stark contrast with the full BFSS model, whose Witten index $W_I = 1$ implies beyond doubt the existence of at least one supersymmetric ground state. The zero index result for mini-BFSS has led to the interpretation that it may not have any zero energy ground states [6,10], and hence no holographic interpretation, the logic being that, without a rich low energy spectrum, scattering in mini-BFSS would not mimic supergraviton scattering in a putative supersymmetric holographic dual [11]. Of course a vanishing $W_I$ does not confirm the absence of supersymmetric ground states, as there may potentially be an exact degeneracy between the bosonic and fermionic states at zero energy. We weigh in on the existence of supersymmetric states in mini-BFSS by solving the Schrödinger equation numerically for the low-lying spectrum of the N = 2 model, in the in silico spirit of [12]. To deal with the flat directions we numerically diagonalize the Hamiltonian of the mass-deformed mini-BMN matrix model, for which the flat directions are absent, and study the bound state energies as a function of the mass. A numerical analysis of mini-BFSS can also be found in [13,14], which use different methods. What we uncover is quite surprising. As we tune the mass parameter m to zero, we find evidence for four supersymmetric ground states, two bosonic and two fermionic, which cancel in the evaluation of $W_I$. This result seems to agree with plots found in [13,14].
It must be said that our result does not constitute an existence proof for supersymmetric threshold bound states in the massless limit, but it certainly motivates further study of the low-lying spectrum of these theories. We reiterate that the Witten index of mini-BFSS vanishes at any N, and while we have only given evidence for the existence of zero-energy states at N = 2, this encourages us to determine whether a similar cancellation between bosonic and fermionic states occurs for N ≫ 2. If this turns out to be the case, it would provide a new addition to the exceedingly small list of matrix models with a potential holographic limit at large N.

The organization of the paper is as follows: in Sec. II we present the supercharges, Hamiltonian, and symmetry generators of the mini-BMN model for arbitrary N. In Sec. III we restrict to N = 2 and give coordinates in which the Schrödinger equation becomes separable. In Sec. IV we provide our numerical results, and in Sec. V we derive the one-loop effective theory on the moduli space in the massless theory. We conclude with implications for the large-N mini-BFSS model in Sec. VI. We collect formulas for the Schrödinger operators maximally reduced via symmetries in Appendix A and compute the one-loop metric on the Coulomb branch moduli space in Appendix B.

A. Supercharges and Hamiltonian

Let us consider the supersymmetric quantum mechanics of SU(N) bosonic matrices $X^i_A$ and their superpartners $\lambda_{A\alpha}$. The quantum mechanics we have in mind has four supercharges.¹ The parameter m is simply the mass of the $X^i_A$. The massless version of this model was introduced in [1] and can be derived by dimensionally reducing $\mathcal{N} = 1$, d = 4 super Yang-Mills to the quantum mechanics of its zero modes. The mass deformation was introduced in [3] and can be obtained from a dimensional reduction of the same gauge theory on $\mathbb{R} \times S^3$. We direct the reader to [2,3] for an introduction to these models. This quantum mechanics should be thought of as a simplified version of the BMN matrix model [5] (mini-BMN for brevity). The massless limit should then be thought of as a mini-BFSS matrix model [4]. The lowercase index $i = 1, \ldots, 3$ runs over the spatial dimensions (in the language of the original gauge theory), and the uppercase index $A = 1, \ldots, N^2 - 1$ runs over the generators of the gauge group SU(N). The $\sigma^i$ are the Pauli matrices, and Greek indices run over $\alpha = 1, 2$. In keeping with [1], the canonical momenta are defined in (2.3), and $f_{ABC}$ are the structure constants of SU(N). The gauginos obey the canonical fermionic anticommutation relations $\{\lambda_{A\alpha}, \bar{\lambda}^{\beta}_{B}\} = \delta_{AB}\,\delta^{\beta}_{\alpha}$, and hence the algebra generated by these supercharges takes the form (2.4), with Hamiltonian (2.5). The operators $G_A$ and $J_k$ appearing in the algebra are, respectively, the generators of gauge transformations and SO(3) rotations; they are given in (2.6). In solving for the spectrum of this theory, we must impose the constraint $G_A|\psi\rangle = 0$ for all A. In the above expressions, whenever fermionic indices are suppressed, it is implied that they are summed over. Let us briefly note the dimensions of the fields and parameters in units of the energy, $[E] = 1$. These are $[X] = -1/2$, $[\lambda] = 0$, $[g] = 3/2$, and $[m] = 1$. Therefore, an important role will be played by the dimensionless quantity

$\nu \equiv m\, g^{-2/3}. \qquad (2.7)$

We consider here the mass-deformed gauge quantum mechanics because, in the absence of the mass parameter m, the classical potential has flat directions (see Fig. 1).

¹ Spinors and their conjugates transform, respectively, in the $\mathbf{2}$ and $\bar{\mathbf{2}}$ of SO(3). Spinor indices are raised and lowered using the Levi-Civita symbol $\epsilon^{\alpha\beta} = -\epsilon_{\alpha\beta}$, with $\epsilon^{12} = 1$.
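As a quick consistency check (ours, not in the source), the listed dimensions confirm that ν is dimensionless:

```latex
[\nu] = [m] - \tfrac{2}{3}\,[g] = 1 - \tfrac{2}{3}\cdot\tfrac{3}{2} = 0 .
```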
Turning on this mass deformation gives us a dimensionless parameter ν to tune in studying the spectrum of this theory, and allows us to approach the massless limit from above.

B. Symmetry algebra

Let us now give the symmetry algebra of the theory. The components of $\vec{J}$ satisfy the standard SO(3) commutation relations, $[J_i, J_j] = i\,\epsilon_{ijk} J_k$. There is an additional U(1)$_R$ generator $R \equiv \bar{\lambda}_A \lambda_A$, which counts the number of fermions; its commutators with the supercharges follow from the fact that the supercharges are linear in the fermions. The Hamiltonian also has a particle-hole symmetry (2.10), where $\epsilon^{\alpha\beta}$ is the Levi-Civita symbol. This transformation leaves the Hamiltonian invariant but takes $R \to 2(N^2 - 1) - R$ and effectively cuts our problem in half. One peculiar feature of the mass-deformed theory is that the supercharges do not commute with the Hamiltonian, as a result of the vector $\vec{J}$ appearing in (2.4): it is easy to show that acting with a supercharge increases or decreases the energy of a state by $\pm\frac{m}{2}$. This is a question of R frames, as discussed in [3]. Essentially we can choose to measure energies with respect to the shifted Hamiltonian $H_m \equiv H + \frac{m}{2} R$, which commutes with the supercharges, and write the algebra in the form (2.12).

C. Interpretation as D particles

The ν → 0 limit of this model can be thought of as the world-volume theory of a stack of N D-branes compactified along a special Lagrangian cycle of a Calabi-Yau threefold [2]. The $X^i_A$ then parametrize the non-Abelian geometry felt by the compactified D particles in the remaining noncompact (3+1)-dimensional asymptotically flat spacetime. The addition of the mass parameter corresponds to adding curvature and magnetic fluxes to the compact manifold [3], changing the asymptotics of the noncompact spacetime to AdS$_4$. This interpretation was argued for in [3,15] and passes several consistency checks. Hence we should think of the mass-deformed theory as describing the nonrelativistic dynamics of D particles in an asymptotically AdS$_4$ spacetime, and the massless limit as taking the AdS radius to infinity in units of the string length. To be more specific, it will be useful to translate between our conventions and those of [3]. One identifies $m = \Omega$, $g^2 = 1/m_v$, $\{X, \lambda\}_{\rm us} = m_v^{1/2}\,\{X, \lambda\}_{\rm them}$ in units where the string length $l_s = 1$. Reintroducing $l_s$, this dictionary implies that $g^2 = g_s/(l_s^3 \sqrt{2\pi})$, with $g_s$ the string coupling, is set by a combination of the magnetic fluxes threading the compact manifold, and similarly $l_{\rm AdS} \equiv 1/m$ is set by a combination of these magnetic fluxes and the string length. For AdS$_4 \times \mathbb{CP}^3$ compactifications dual to the Aharony-Bergman-Jafferis-Maldacena (ABJM) theory this was worked out in detail in [3,16], and one arrives at the identification (2.13), where k and N are, respectively, the integrally quantized magnetic 2-form and 6-form fluxes. In this example, taking $\nu = \sqrt{2\pi}\,(k^2/N)^{1/3} \to 0$ while keeping $g_s$ fixed takes the AdS radius to infinity in units of $l_s$.

The main focus of the next sections is on whether this stack of D particles forms a supersymmetric bound state, particularly in the ν → 0 limit. There the Witten index $W_I \equiv \mathrm{Tr}_{\mathcal{H}}\{(-1)^R e^{-\beta H}\}$ has been computed [6-10] and evaluates to zero. This is in contrast with the full BFSS matrix model, whose index is $W_I = 1$, confirming the existence of a supersymmetric ground state. We will use the numerical approach of [12] and verify whether supersymmetry is preserved or broken in the SU(2) case. We find evidence that supersymmetry is preserved in the ν → 0 limit, and that there are precisely four ground states contributing to the vanishing Witten index.
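As a quick numerical illustration of the ABJM dictionary, the sketch below evaluates $\nu = \sqrt{2\pi}(k^2/N)^{1/3}$ for a few flux choices; the particular (k, N) values are our arbitrary examples, not from the source.

```python
import math

def nu(k: int, n: int) -> float:
    """Dimensionless coupling nu = sqrt(2*pi) * (k^2 / N)^(1/3) from the ABJM dictionary."""
    return math.sqrt(2.0 * math.pi) * (k * k / n) ** (1.0 / 3.0)

# Illustrative flux choices: driving nu -> 0 (large AdS radius in string
# units) requires N to grow much faster than k^2.
for k, n in [(1, 10), (1, 10_000), (2, 10_000_000)]:
    print(f"k={k:>2}, N={n:>10,}: nu = {nu(k, n):.4f}")
```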
We will use the numerical approach of [12] and verify whether supersymmetry is preserved or broken in the SU(2) case. We find evidence that supersymmetry is preserved in the $\nu \to 0$ limit, and that there are precisely four ground states contributing to the vanishing Witten index.

A. Polar representation of the matrices

We are aiming to solve the Schrödinger problem $H_m |\psi\rangle = E_m |\psi\rangle$. We will not be able to do this for arbitrary N, and from here on we will restrict to gauge group SU(2), for which the structure constants are $f_{ABC} = \epsilon_{ABC}$. In this case the wave functions depend on 9 bosonic degrees of freedom tensored into a 64-dimensional fermionic Hilbert space. It is thus incumbent upon us to reduce this problem maximally via symmetry. To do so, we exploit the fact that the matrices $X^i_A$ admit a polar decomposition, given in (3.1)-(3.2), where $[L_i]_{jk} \equiv -i\epsilon_{ijk}$ are the generators of SO(3). The diagonal matrix represents the spatial separation between the pair of D-branes in the stack. The $\varphi_i$ and $\vartheta_i$ represent the (respectively, gauge-dependent and gauge-independent) Euler-angle rigid body rotations of the configuration space. This parametrization is useful because the Schrödinger equation is separable in these variables, as we show in Appendix A.

The metric on configuration space can be reexpressed as in (3.4). The angular differentials are the usual SU(2) Cartan-Maurer differential forms, and the volume element used to compute the norm of the wave function follows from this metric. To cover the configuration space correctly, we take the new coordinates to lie in the ranges given in [17].

The generators of gauge transformations $G_A$ and rotations $J_i$ are given in (2.6), and they satisfy the expected commutation relations. To label the SU(2)$_{\rm gauge} \times$ SO(3)$_J$ representations of the wave functions, it is useful to define the "body fixed" angular momentum and gauge operators $\vec{P}$ and $S_A$. Unlike the generators of angular momentum, $\vec{P}$ is not conserved. However, as we explain in Appendix A, it is still useful for separating variables. The bosonic parts of $\vec{J}$ and $\vec{P}$, which we call $\vec{\mathcal{J}}$ and $\vec{\mathcal{P}}$, respectively, are given in terms of the angular coordinates in (3.11)-(3.15). Similarly let us define $\mathcal{G}_A$ and $\mathcal{S}_A$ as the bosonic parts of the $G_A$ and $S_A$ operators. The $\mathcal{G}_A$ are related to the $\mathcal{J}_i$ by replacing $\vartheta_i \to \varphi_i$. It is easy to guess that the $\mathcal{S}_A$ are then related to the $\mathcal{P}_i$ via the same replacement.

We are now ready to give expressions for the momentum operators and the kinetic energy operator in terms of the new variables; these are given in [18]. It is also straightforward to write down the bosonic potential V in terms of the new variables (3.19). As expected, it is independent of the angular variables. We have depicted constant potential surfaces in Fig. 1. Apart from the coordinates $x_a$, the nonlinear coordinates $y_a$ and $z_a$ defined in (3.20) will often appear in the equations below. With these definitions the kinetic term can be written as in (3.21). Notice that the term $\sum_{a=1}^{3} y_a \mathcal{P}_a^2$ is the kinetic energy of a rigid rotor with principal moments of inertia $y_a^{-1}$. Unlike the $c = 1$ matrix model, the angular-independent piece of the kinetic term cannot be trivialized by absorbing a factor of $\sqrt{\Delta}$ into the wave function [19]. Instead, we are left with the term T defined in (3.22), whose appearance in the Schrödinger equation acts as an attractive effective potential between the $x_a$.
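Before moving on to the fermions, here is a numerical aside on the bosonic polar decomposition just described. The Python sketch below is an illustrative fragment (not the authors' code) that factors a generic real 3x3 configuration into two proper rotations and a diagonal core via the SVD; identifying the SVD factors with the Euler-angle matrices and the signed separations $x_a$ is an assumption made purely for illustration.

import numpy as np

def polar_decompose(X):
    """Factor a real 3x3 matrix X as X = R1 @ np.diag(x) @ R2.T with
    R1, R2 proper rotations (det = +1), mimicking the Euler-angle /
    diagonal parametrization of the SU(2) configuration space.
    The entries of x may carry signs, playing the role of the
    (signed) brane separations."""
    R1, x, R2t = np.linalg.svd(X)        # X = R1 @ diag(x) @ R2t, x >= 0
    R2 = R2t.T
    x = x.copy()
    if np.linalg.det(R1) < 0:            # absorb a reflection into x_3
        R1[:, 2] *= -1
        x[2] *= -1
    if np.linalg.det(R2) < 0:
        R2[:, 2] *= -1
        x[2] *= -1
    return R1, x, R2

# Usage: a random configuration and a round-trip check.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))
R1, x, R2 = polar_decompose(X)
assert np.allclose(X, R1 @ np.diag(x) @ R2.T)
assert np.isclose(np.linalg.det(R1), 1.0) and np.isclose(np.linalg.det(R2), 1.0)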
B. Gauge-invariant fermions

Because the operators $G_A$ in (2.6) have a nontrivial dependence on the gauginos $\lambda_{A\alpha}$, it is not sufficient to simply suppress the wave function's dependence on the gauge angles $\varphi_i$. Instead, we can write down a set of gauge-invariant fermions that contain the entire dependence on the gauge angles [20]; these are the $\chi_{A\alpha}$ defined in (3.23). They satisfy $\{\chi_{A\alpha}, \bar{\chi}^{\beta}_{B}\} = \delta_{AB}\,\delta^{\beta}_{\alpha}$, but they no longer commute with bosonic derivatives. Defining $\tilde{\sigma}^{i\,\beta}_{\ \alpha} \equiv M_{ji}\,\sigma^{j\,\beta}_{\ \alpha}$, we can now write the supercharges in terms of the new parametrization, where we have put the gauge-invariant fermions to the left so as to remind the reader that the bosonic derivatives are not meant to act on them in the supercharges. The Hamiltonian H (not $H_m$) in the new parametrization is given in (3.25).

IV. NUMERICAL RESULTS

To calculate the spectrum of the Hamiltonian (3.25), we must reduce our problem using symmetry; that is, we should label our states via the maximal commuting set of conserved quantities: $H_m$, $J_3$, $\vec{J}^2$, R. Because of the discrete particle-hole symmetry (2.10) we need only consider $R = 0, \dots, 3$. In Appendix A we construct gauge-invariant highest-weight representations of SO(3)$_J$ in each R-charge sector. This means we fix the wave functions' dependence on the angles $\vartheta_i$ and $\varphi_i$ and provide the reduced Schrödinger operators that depend only on the $x_a$.²

Our numerical results for the lowest energy states of $H_m$ for each R and j are presented in Fig. 2 and were obtained by inputting the restricted Schrödinger equations of Appendix A into Mathematica's NDEigenvalues command, which uses a finite element approach to solve for the eigenfunctions of a coupled differential operator on a restricted domain. We have labeled each row by the fermion number R and each column by the SO(3)$_J$ highest weight eigenvalue j (i.e., $\vec{J}^2 |\psi\rangle = j(j+1)|\psi\rangle$ and $J_3 |\psi\rangle = j|\psi\rangle$). A few comments are in order:

(1) The most striking feature of these plots is the seeming appearance of zero energy states for $(R, j) = (2, 0)$ and $(R, j) = (3, 1/2)$ as $\nu \to 0$. Since the Witten index $W_I = 0$, and since the states in the (2, 0) and (3, 1/2) sectors seem to have nonzero energy for any finite ν, it must be the case that these states are elements of the same supersymmetry multiplet. This must be so for the deformation invariance of $W_I$.

(2) Since we know, by construction, that the lowest energy $(R, j) = (2, 0)$ and $(R, j) = (3, 1/2)$ states are related by supersymmetry, we can use the difference in their numerically obtained energies as a benchmark of our numerical errors (a sketch of this style of eigenvalue computation follows these comments). Obtaining the $(R, j) = (2, 0)$ ground state energy required solving a coupled Schrödinger equation involving 15 functions in 3 variables. For the $(R, j) = (3, 1/2)$ state, the number of functions one is numerically solving for jumps to 40. In the latter case, it was difficult to reduce our error (either by refining the finite element mesh or by increasing the size of the domain) in a significant way without Mathematica crashing. This is despite the fact that we had 12 cores and 64 GB of RAM at our disposal. In Fig. 3 we plot the percentage error in the $H_m$ energy difference between these two states as a function of ν. We find that the energy difference between these states is around 13% of the total energy as a function of ν. For comparison, we also do this for the lowest $(R, j) = (0, 0)$ and $(R, j) = (1, 1/2)$ states, where the numerics are more reliable as a result of solving a much simpler set of equations. There the difference between the computed energies is at most 2%.
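The paper's numerics use Mathematica's finite element NDEigenvalues on the coupled reduced operators. As a rough illustration of the same workflow, the Python sketch below solves a single one-dimensional Schrödinger operator (a stand-in, not the paper's 15- or 40-component systems) by finite differences with scipy; the quartic potential is a placeholder.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Stand-in 1D Schrödinger problem: H = -d^2/dx^2 + V(x) on a box
# with Dirichlet walls, discretized by second-order finite differences.
L, n = 20.0, 2000                     # box half-width and grid size
x = np.linspace(-L, L, n)
h = x[1] - x[0]

V = 0.25 * x**4                       # placeholder quartic potential

kinetic = diags([np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)],
                offsets=[-1, 0, 1]) / h**2
H = kinetic + diags(V)

# Lowest few eigenvalues via shift-invert around zero.
evals = eigsh(H, k=4, sigma=0.0, return_eigenvectors=False)
print(np.sort(evals))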
(3) Our results suggest that there are four supersymmetric states, two of which are bosonic and two of which are fermionic, which would cancel in the evaluation of the index. Explicitly, the two bosonic states are $j = 0$ singlets in the $R = 2$ and $R = 4$ sectors (recall the discrete particle-hole symmetry of the theory), and the two fermionic states are the $j = 1/2$ doublet in the $R = 3$ sector. It is interesting to note that there are not more states in this multiplet; for example, numerically studying the $(R, j) = (0, 1)$ sector reveals no evidence for a supersymmetric state in the $\nu \to 0$ limit.

(4) The massless SU(2) model was studied using a different numerical approach in [13,14], and their plots for the ground state energies seem to approach ours, particularly Figs. 2 and 5 of [14].

(5) Our numerical evidence for these supersymmetric states does not constitute a proof, since we will never be able to numerically resolve whether these states have exactly zero energy. However, the result is highly suggestive of a supersymmetry preserving set of states at $\nu = 0$, and there is no contradiction with the analytically obtained Witten index result $W_I = 0$. It would be interesting to analyze the existence of these states analytically in future work.

FIG. 2. Lowest energy eigenvalue for $R \in \{0, 1, 2, 3\}$ and $j \in \{0, 1/2\}$ as a function of ν. Each row corresponds to a different value of R up to 3, and the columns are labeled by $j = 0$ or $j = 1/2$. Note that for $\nu = 0$ there are $E = 0$ energy eigenstates in both the $R = 2$ and $R = 3$ sectors of the theory. This implies the existence of four supersymmetric ground states at $\nu = 0$: a fermionic $j = 1/2$ doublet in the $R = 3$ sector, and two bosonic $j = 0$ singlets in the $R = 2$ and $R = 4$ sectors.
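The cancellation asserted in comment (3) can be checked against the vanishing index directly; the LaTeX lines below simply tally the $(-1)^R$ contributions of the four ground states listed in the caption above.

% Contribution of the four claimed E = 0 states to
% W_I = Tr{(-1)^R e^{-beta H}}:
\[
  W_I\big|_{E=0}
  \;=\; \underbrace{(+1)}_{R=2,\ j=0}
  \;+\; \underbrace{(+1)}_{R=4,\ j=0}
  \;+\; \underbrace{(-1)\cdot 2}_{R=3,\ j=1/2\ \text{doublet}}
  \;=\; 0 ,
\]

consistent with the deformation-invariant result $W_I = 0$.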
V. EFFECTIVE THEORY ON THE MODULI SPACE OF THE SU(2) MODEL

To get a better handle on the previous section's numerical results, we will now study the $\nu \to 0$ limit of the matrix model analytically. Since the full problem is clearly quite difficult even for $N = 2$, we will study the massless model in some parametric limit. This is possible because the theory has a moduli space³: a flat direction where the D-branes can become well separated, and along this moduli space certain fields become massive and can be integrated out. We will parametrize this moduli space by the coordinates $(x_3, \vartheta_2, \vartheta_1)$ and will henceforth relabel them $(x_3, \vartheta_2, \vartheta_1) \to (r, \theta, \phi)$ for the remainder of this section. The parametric limit we will take is the limit of large r.

To derive the effective theory along the moduli space, we will first take $(r, \theta, \phi)$ to be slowly varying and expand $H = H^{(0)} + H^{(1)} + \cdots$ in inverse powers of the dimensionless quantity $g r^3$. We will compute the effective Hamiltonian in perturbation theory by integrating out the other fields in their ground state, in which $(r, \theta, \phi)$ appear as parameters. Similar analysis to this was performed in [20-25]. Defining $\vec{\partial} \equiv (\partial_{x_1}, \partial_{x_2})$, the lowest-order Hamiltonian $H^{(0)}$ involves $\tilde{\sigma}^{i\,\beta}_{\ \alpha} \equiv M_{ji}\,\sigma^{j\,\beta}_{\ \alpha}$, which depends explicitly on $(\vartheta_1, \vartheta_2, \vartheta_3)$. It is straightforward to show that $H^{(0)}$ admits a zero energy ground state $\Psi^{(0)}$ built on the fermionic vacuum $|0\rangle$ and normalized appropriately. Similarly we can expand the supercharges, $Q_\alpha = Q^{(0)}_\alpha + Q^{(1)}_\alpha + \cdots$. It is easy to check that $Q^{(0)}_\alpha \Psi^{(0)} = \bar{Q}^{(0)\beta} \Psi^{(0)} = 0$. We are now tasked with finding the effective supercharges $Q^{\rm eff}_\alpha = \langle Q^{(1)}_\alpha \rangle_{\Psi^{(0)}} + \cdots$ that act on the massless degrees of freedom $(r, \theta, \phi)$ along the moduli space.

At lowest order we find the supercharges (acting on gauge-invariant wave functions) are those of a free particle in $\mathbb{R}^3$ and its fermionic superpartner, where we have relabeled $(r, \theta, \phi)$ in Cartesian coordinates and defined $(\psi_\alpha, \bar{\psi}^\beta) \equiv (\chi_{3\alpha}, \bar{\chi}^{\beta}_{3})$. Since the remaining gauge angles $(\varphi_1, \varphi_2)$ have no kinetic terms in the effective theory along the moduli space, we need not consider them as dynamical variables and can treat $\psi_\alpha$ as a fundamental field.

Let us now compute the effective theory to next order in perturbation theory. Instead of computing this in the operator formalism, let us first invoke symmetry arguments to constrain what the answer should look like. The low energy effective theory on the moduli space should be a supersymmetric theory with four supercharges and an SO(3) R-symmetry; therefore, it should fall in the class discovered in [26,27]. To preserve the SO(3) symmetry, f should be a function of $r \equiv |\vec{x}|$. Notice that (5.7) reduces to the theory of a free particle and its superpartner when $f = 1$. Therefore we should find that at one-loop order $f = 1 + c/(g r^3)$, since $(g r^3)^{-1}$ is our expansion parameter, with c to be determined. A calculation [22,23], reproduced in Appendix B, gives $c = -3/2$, or

$$f(r) = 1 - \frac{3}{2 g r^3}. \qquad (5.8)$$

Analytic evidence for the numerically found supersymmetric ground states can be obtained by studying the Schrödinger problem associated with (5.7). We do not do this here, but we can gain some intuition by studying the existence of normalizable zero modes of the Laplacian on moduli space [28]. Two normalizable zero modes can be constructed as follows. The zero form is a zero mode of the Laplacian, but it is not normalizable. Normalizable forms can be constructed instead, and these are normalizable within the domain $r \in [(3/(2g))^{1/3}, \infty)$. Since there exist zero modes in this toy-moduli-space approximation, it would be interesting to study the set of ground states of (5.7) in more detail.
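The inner edge of the normalizability domain quoted above is exactly the radius where the one-loop metric factor (5.8) degenerates, which is a one-line check:

% The metric factor f(r) = 1 - 3/(2 g r^3) vanishes precisely at the
% lower end of the quoted domain, r_* = (3/(2g))^{1/3}:
\[
  f(r_*) \;=\; 1 - \frac{3}{2g\,r_*^3}
         \;=\; 1 - \frac{3}{2g}\cdot\frac{2g}{3}
         \;=\; 0 .
\]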
VI. DISCUSSION

In this paper we have studied the mini-BFSS/BMN model with gauge group SU(2) and uncovered numerical evidence for a set of supersymmetric ground states in the massless limit of the theory. In the massless limit the matrices can become widely separated. The effective theory on the moduli space has nontrivial interactions governed by a metric that gets generated on this moduli space at one loop. Our numerical evidence for zero energy ground states is limited to the $N = 2$ case, but we can now safely establish that the vanishing Witten index of mini-BFSS does not conclusively imply supersymmetry breaking, even for $N > 2$. This should renew our interest in determining whether supersymmetric states continue to exist at large N.

Let us now discuss what may happen in the SU(N) case at large N. The quartic interaction in (2.5) can be rewritten as a commutator-squared interaction $(f_{ABC} X^i_B X^j_C)^2 \sim {\rm Tr}([X^i, X^j]^2)$, where $X^i \equiv X^i_A \tau^A$ and the $\tau^A$ are the generators in the fundamental of SU(N). Therefore, at tree level, along the moduli space there will be a set of $N - 1$ massless, noninteracting point particles in $\mathbb{R}^3$ (and their superpartners), each one corresponding to an element of the Cartan of SU(N). At one loop there will be a correction to the moduli space metric, depending on the relative distances between these particles. Just as in the SU(2) case these corrections will come at order $|r_a - r_b|^{-3}$. One difference, however, is that there may be an enhancement of order N to this correction.

It would certainly be interesting to see if we can isolate the $|r_a - r_b|^{-3}$ corrections to the moduli space metric by taking a large N limit, as can be done in the D0-D4 system [28] and in the three-node Abelian quiver [29]. Perhaps we can adapt the methods in [30] for these purposes. The analysis in [22] seems to suggest that such a decoupling limit at large N is possible. Interestingly, it was shown in [29] that the one-loop effective action on the Coulomb branch of the three-node Abelian quiver exhibits an emergent conformal symmetry at large N. This conformal symmetry depends on the delicate balance between the form of the interaction potential and the metric on the moduli space, which has a similar $|r_a - r_b|^{-3}$ form as in (5.8). It would be interesting to establish whether the SU(N) generalization of the model studied in this paper also has a nontrivial conformal symmetry at infinite N, broken by finite-N effects. We save this problem for future work, but list here some reasons why this would be worth studying:

(1) The BFSS matrix model has a holographic interpretation [11,31-33]. At large N it is dual to a background of D0 branes in type IIA supergravity. In BFSS there is no correction to the moduli space metric, and neither side of this duality is conformal. The BFSS matrix model is thus a theory of the 10D flat space S matrix. It would be interesting to understand the large N version of mini-BFSS in the context of holography along similar lines. Because of the large number of coupled degrees of freedom at large N and the reduced supersymmetry, the effective theory along the moduli space of mini-BFSS has a nontrivial metric and may potentially exhibit a nontrivial conformal fixed point along this moduli space, as happens for quiver quantum mechanics models with vector rather than matrix interactions [29,34]. To answer this question definitively we will need to compute the effective theory along the Coulomb branch for $N \gg 2$ and check whether it is conformal.

(2) New results have shown that a certain class of disordered quantum mechanics models, known as SYK for Sachdev-Ye-Kitaev, exhibits phenomenology of interest for near-extremal black holes (see [34-41] and references therein, as well as [42-45] for models without disorder). These phenomena include an emergent conformal symmetry in the IR, maximal chaos [46], and a linear-in-T specific heat. Despite the successes of these models, they are not dual to weakly coupled gravity. BFSS is a large N gauged matrix quantum mechanics dual to weakly coupled Einstein gravity but, as we previously mentioned, it does not have an emergent conformal symmetry and remains a model of D particles in flat space. It would certainly be interesting if mini-BFSS fell in the universality class of quantum mechanics models with emergent conformal symmetry in the IR and maximal chaos, such as the SYK model and its nondisordered cousins, but remained dual to weakly coupled gravity. Recently [47,48] advocated the study of such matrix models for similar reasons. In the same vein, the authors of [49] study classical chaos in BFSS numerically.

(3) If this model, like SYK, is at all related to the holography of near-extremal black holes, then we can try to study its S matrix to gain some insight into the real time dynamics of black hole microstates. A numerical implementation of such a study in the context of similar supersymmetric quantum mechanics models with flat directions can be found in [50].
(4) The slow moving dynamics of a class of supersymmetric multicentered black hole solutions in supergravity is a superconformal quantum mechanics [51-54] with no potential, provided a near horizon limit is taken. It would be interesting to understand if there is some limit in which the multicentered black hole moduli space quantum mechanics and the large N matrix quantum mechanics on the moduli space coincide, perhaps as a consequence of nonrenormalization theorems as in [2].

ACKNOWLEDGMENTS

It is a pleasure to thank Dionysios Anninos, Frederik Denef, Felix Haehl, Rachel Hathaway, Eliot Hijano, Jaehoon Lee, Eric Mintun, Edgar Shaghoulian, Benson Way, and Mark Van Raamsdonk for helpful discussions. We are particularly indebted to Dionysios Anninos, Frederik Denef, and Edgar Shaghoulian for their comments on an early draft. We made heavy use of Matthew Headrick's grassmann.m package.

APPENDIX A: REDUCED SCHRÖDINGER EQUATION

In this Appendix we construct gauge-invariant highest-weight wave functions of SO(3)$_J$ in each R-charge sector (up to 3) and use these to maximally reduce the Schrödinger equation via symmetries.

1. R = 0

This sector of the theory was studied in [55-57], although without access to numerics. We repeat their analysis here. We wish to separate variables using the SO(3)$_J$ symmetry. We therefore want to write down the highest weight state satisfying $J_3 |\psi\rangle_0 = j |\psi\rangle_0$ and $J_+ |\psi\rangle_0 = 0$, with $J_\pm \equiv J_1 \pm i J_2$. The rest of the spin multiplet can be obtained by acting on $|\psi\rangle_0$ with $J_-$ up to 2j times. This, however, does not entirely fix the angular dependence of the wave function, as these two conditions only fix the dependence on up to two angles. Recall, however, that the operators $\vec{P}$ commute with $\vec{J}$ and $\vec{P}^2 = \vec{J}^2$, but $[H, \vec{P}] \neq 0$. We will then write $|\psi\rangle_0$ as a sum of terms with definite $P_3$ eigenvalue. Since the number of terms in the wave function grows with j, it will be cumbersome to give the reduced radial Schrödinger equation for arbitrary j. Instead, we will give the expressions for $j = 0, \frac{1}{2}, 1$.

Before giving the reduced Schrödinger equations, it is worth noting that it has long been known that there exist no supersymmetric states in this sector [1]. The reason is that the supersymmetry equations $Q_\alpha |\psi\rangle_0 = \bar{Q}^{\beta} |\psi\rangle_0 = 0$ are easy to solve, and the resulting solution is non-normalizable. It is also known that the spectrum in this sector is discrete [57].

For parsimony let us define $\hat{V}$, with V defined in (3.19). Then for $j = 0$ the reduced Schrödinger equation, obtained from $H_m |\psi\rangle_0 = E_m |\psi\rangle_0$, takes a simple scalar form. For $j = 1/2$ there is no mixing between the $f_{\pm 1/2}(x_a)$, and each satisfies the same equation, where T was defined in (3.22). Finally, for $j = 1$ we have a coupled system.

2. R = 1

Continuing on from the last section, we want to write down wave functions in the $R = 1$ sector that are gauge invariant and satisfy $J_3 |\psi\rangle_1 = j |\psi\rangle_1$ and $J_+ |\psi\rangle_1 = 0$. To do so, we write our wave functions as a sum of terms acting on the fermionic vacuum $|0\rangle$, each term in the sum having a definite $P_3$ eigenvalue. The functions $f^p_{A\alpha}$ satisfying these conditions take a particular form; we remind the reader that the $\bar{\chi}^{\alpha}_{A}$ are the gauge-invariant fermions defined in (3.23). The reduced Schrödinger equation for $j = 0$ involves a $6 \times 6$ matrix A that can be written in terms of $2 \times 2$ blocks, where the coordinates $y_a$ and $z_a$ (nonlinearly related to the $x_a$) were defined in (3.20). Using the above definitions it is straightforward to write down the equations for $j = 1/2$.
3. R = 2

As we can see, the number of equations keeps increasing with fermion number and spin. Therefore in this section and the next, we will only give the reduced Schrödinger equations for $j = 0$. As before, the general highest weight $R = 2$ wave function admits a decomposition into terms with definite $P_3$ eigenvalue. To avoid overcounting, let us set $f^p_{AB\alpha\beta} = 0$ whenever $B < A$, and similarly $f^p_{AA\alpha\beta} = 0$ (no sum on indices) whenever $\beta \leq \alpha$. Imposing that $J_3 |\psi\rangle_2 = j |\psi\rangle_2$, $J_+ |\psi\rangle_2 = 0$, and that each term in the sum has definite $P_3$ eigenvalue forces the functions $f^p_{AB\alpha\beta}$ to take on a particular form (no sum on indices and $A < B$). Notice that even for $j = 0$, determining the spectrum will involve solving a set of 15 coupled partial differential equations. We label the set of functions $Y^p_{AB} \equiv Y^p_{6-A-B}$, and so on for the remaining functions, and collect these components into a vector of functions. The resulting Schrödinger equation involves matrices D, L, and M, which are $15 \times 15$ and can be written in terms of $3 \times 3$ blocks. In these definitions, the $L_i$ are the $3 \times 3$ generators of SO(3) defined below (3.1). Whenever a matrix appears in an absolute value symbol $|\cdot|$, the absolute value is to be applied to the entries of the matrix. The Schrödinger operator for $j = 1/2$ will be a generalization of the above operator to one acting on 30 functions. We do not provide expressions for it here, but we analyze its spectrum in the main text.

4. R = 3

The highest weight $R = 3$ wave functions take an analogous form. To avoid overcounting we impose ordering conditions on the indices; because of the fermionic statistics, $f^p_{AAA\alpha\beta\gamma} = 0$ identically. Imposing the highest weight condition forces $f^p_{123\alpha\beta\gamma}$ to take a particular form. Notice that for $j = 0$ the reduced Schrödinger equation is a set of 20 coupled partial differential equations. We give the Schrödinger operator acting on a vector of functions $\Psi^0_{R=3}$; with $\Psi^0_{R=3}$ defined, we are tasked with solving a set of differential equations in which I, J, and K are $20 \times 20$ matrices, with J and K expressible in terms of $4 \times 4$ blocks. The Hamiltonian acting on the $R = 3$, $j = 1/2$ wave function will be a generalization of the above operator to one acting on 40 functions. We will not give the expression here, but we analyze the spectrum of the $R = 3$, $j = 1/2$ sector numerically in the main text.

APPENDIX B: METRIC ON THE MODULI SPACE

To determine the one-loop effective action for the $\nu = 0$ theory, we follow [28,58,59] and pass to the Lagrangian formulation of our gauge quantum mechanics, including gauge-fixing terms and ghosts. We will use the background field method [60,61]; that is, we will expand the fields $X^i_A = B^i_A + \tilde{X}^i_A$, where $B^i_A$ is a fixed background field configuration and the $\tilde{X}^i_A$ are the fluctuating degrees of freedom. We choose $B^i_A = \delta_{A3}\, x^i$ such that it parametrizes motion along the moduli space. The gauge-fixed Lagrangian is

$$L = L_{\rm bos} + L_{\rm ferm} + L_{\rm g.f.} + L_{\rm ghost}, \qquad ({\rm B}1)$$

where $L_{\rm bos}$ contains the covariantized kinetic terms, with

$$D_t X^i_A \equiv \dot{X}^i_A + g f_{ABC}\, A_B\, X^i_C, \qquad D_t \lambda_{A\alpha} \equiv \dot{\lambda}_{A\alpha} + g f_{ABC}\, A_B\, \lambda_{C\alpha}.$$

We further set $\xi = 1$, corresponding to Feynman gauge. We can obtain the correction to the metric on moduli space by choosing a background field $\vec{x}(t)$ characterized by an impact parameter b for a particle moving at speed v [28].
We now Wick rotate $t \to -i\tau$, $v \to i\gamma$, and $A_A \to i A_A$, and expand the action to quadratic order in the fluctuating fields about the background field. The idea is to integrate out, at one loop, all fields that obtain a mass through their interaction with the background field.
Inflammatory and Coagulative Considerations for the Management of Orthopaedic Trauma Patients With COVID-19: A Review of the Current Evidence and Our Surgical Experience

Summary: Mounting evidence suggests that the pathogenesis of coronavirus disease 2019 (COVID-19) involves a hyperinflammatory response predisposing patients to thromboembolic disease and acute respiratory distress. In the setting of severe blunt trauma, damaged tissues induce a local and systemic inflammatory response through similar pathways to COVID-19. As such, patients with COVID-19 sustaining orthopaedic trauma injuries may have an amplified response to the traumatic insult because of their baseline hyperinflammatory and hypercoagulable states. These patients may have compromised physiological reserve to withstand the insult of surgical intervention before reaching clinical instability. In this article, we review the current evidence regarding the pathogenesis of COVID-19 and its implications for the management of orthopaedic trauma patients by discussing a case and the most recent literature.

INTRODUCTION

The coronavirus disease 2019 (COVID-19) is a novel viral illness that is precipitated by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). 1 As of April 19, 2020, COVID-19 has been diagnosed in nearly 2.3 million people worldwide, has led to over 150,000 deaths, and has been deemed a pandemic by the World Health Organization. 2 In its most severe form, COVID-19 is characterized by a cytokine release syndrome (CRS) that progresses to multisystem organ failure and death. 3,4 Recent reports also demonstrate that patients are predisposed to thromboembolic disease through both the direct and indirect effects of COVID-19. 1,5 Although some patients require early intensive care, the majority experience a more benign clinical course. A certain subset of patients, however, mounts a large inflammatory response despite initially appearing well. 6 Early monitoring of inflammatory markers is being used to help predict which patients will eventually necessitate higher levels of care. 7

A similar hyperinflammatory and hypercoagulable response leading to multisystem organ failure is also seen in patients after severe polytrauma. [8-10] A less severe initial traumatic insult ("first hit") also has the capacity to produce the same systemic response if it is followed by persistent physiologic derangements or subsequent proinflammatory interventions ("second hit"). 8,11,12 Damage control orthopaedics is often used in polytrauma patients to minimize the second hit and prevent subsequent acute respiratory distress syndrome (ARDS). [13-15] Although a trauma is often thought of as the first hit in the two-hit theory, other hyperinflammatory and hypercoagulable states, such as in COVID-19, may also act as part of the first hit in patients with orthopaedic injuries. This may be especially relevant in patients who appear well but have developed a large inflammatory response. 16 Indeed, recent evidence suggests alarmingly high intensive care unit (ICU) admission and mortality rates after elective surgery on asymptomatic patients in the incubation period of COVID-19. 16,17 However, little is known about how orthopaedic trauma and subsequent fracture fixation modulate the inflammatory response in patients with COVID-19. A better understanding of this relationship can inform the development of evidence-based management strategies in these patients and limit admissions to overcrowded ICUs.
To demonstrate and further define these developing theories on the coagulative and inflammatory risks associated with the surgical treatment of trauma patients with COVID-19, we will present an unexpected outcome in such a patient at our institution. The purpose of this case presentation is to drive the subsequent discussion and literature review on the management of patients with COVID-19 presenting with orthopaedic trauma.

OUR SURGICAL EXPERIENCE

A woman in her 80s with a history of dementia and myeloproliferative disorder presented with an isolated extraarticular fracture of the left distal femur (Fig. 1) and acute nonocclusive deep vein thromboses (DVTs) of the left popliteal and gastrocnemius veins. On presentation, she was afebrile and denied any respiratory symptoms. Her preoperative peripheral capillary oxygen saturation was between 90% and 97% on room air, at times requiring 2 L of supplemental oxygen. Physical examination revealed that the patient was not in respiratory distress, with clear lung sounds. The admission chest radiograph is shown in Fig. 2, demonstrating no major consolidation or infiltrates. Laboratory evaluation showed no leukocytosis, anemia, or thrombocytosis. Given that she was a nursing home resident with known exposure to multiple COVID-19-positive residents, she was tested for the disease and found to be positive.

A multidisciplinary approach was taken for the care of this patient, including orthopaedics, internal medicine, infectious disease, anesthesia, and vascular surgery. She was deemed an asymptomatic COVID+ patient with no concern for her respiratory function. A shared decision was made to proceed with surgical fixation to allow for improved mobility, healing, and pain control. Within 12 hours of initial presentation, the patient underwent reamed, locked retrograde intramedullary nailing of her left distal femur fracture. Intramedullary nailing was chosen because it is the senior author's preferred treatment for distal third femur fractures and the intramedullary implant would assist in preventing medialization of the distal segment. Preoperative templating estimated an isthmus of 16-17 mm. The plan was for placement of a 13-mm diameter nail. The patient tolerated the early steps of the procedure well. During passage of the 14-mm diameter reamer and obtaining cortical chatter, the patient became acutely hypoxic and hypotensive, requiring maximal FiO2 (P/F ratio 102) and increasing vasopressor requirements. After reaming, the patient improved marginally to the point where she was amenable to intramedullary nail placement. After placement of the intramedullary nail, a heparin drip was immediately initiated for presumed intraoperative pulmonary embolism. She remained intubated at the completion of the procedure and was transferred to the ICU. Laboratory values obtained immediately postoperatively are provided in Table 1, along with postoperative laboratory trends. The patient's initial leukocytosis and elevated troponin, brain natriuretic peptide, fibrinogen, D-dimer, ferritin, and C-reactive protein (CRP) are noted. The patient's P/F ratio at that time declined to 88 (PaO2 88 mm Hg, FiO2 100%), indicating severe respiratory failure. Upon arrival to the ICU, a bedside echocardiogram was performed, revealing the right ventricular dilation and septal flattening indicative of right heart strain.
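For context on the oxygenation index quoted above: the P/F ratio is simply the arterial oxygen partial pressure divided by the fraction of inspired oxygen. The short check below reproduces the postoperative value from the reported numbers; the cutoff of 100 for severe ARDS (Berlin definition) is background knowledge, not taken from this report.

% P/F ratio from the reported postoperative blood gas:
% PaO2 = 88 mm Hg on FiO2 = 100% (i.e., 1.00).
\[
  \mathrm{P/F} \;=\; \frac{\mathrm{PaO_2}}{\mathrm{FiO_2}}
               \;=\; \frac{88\ \mathrm{mm\,Hg}}{1.00}
               \;=\; 88 \;<\; 100 ,
\]

consistent with the severe respiratory failure described.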
Computed tomography pulmonary angiogram demonstrated a nonocclusive right main pulmonary artery embolus with right heart strain, a left upper lobe segmental pulmonary artery embolus, and mosaic, ground glass attenuation of the lung parenchyma concerning for viral pneumonia and fat embolism. Subsequent pulmonary angiography redemonstrated the right-sided lobar embolus and elevated mean pulmonary artery pressure. Right-sided percutaneous pulmonary suction thrombectomy was performed. Blood clot and fat emboli were removed, with no significant residual lobar or segmental pulmonary emboli on follow-up angiogram (Fig. 3). The patient's relatively small embolic burden did not correlate with her clinical presentation of respiratory failure with right heart strain. In addition, the patient's hemodynamic response and lack of improvement after embolectomy were not characteristic of other experiences with similar volumes of thrombus or fat extraction. The patient required inotropic support for the first 48 hours postoperatively. She was successfully extubated on postoperative day 9 and transferred out of intensive care on postoperative day 13.

ORTHOPAEDIC TRAUMA AND COVID-19

Although the orthopaedic surgeon's role in mitigating the COVID-19 crisis may appear disparate compared with that of our medical colleagues, the management of patients with COVID-19 undergoing nonelective orthopaedic trauma surgery demands thoughtful consideration. Emerging evidence in the medical literature suggests that a cytokine storm, also known as CRS, plays an integral role in severe COVID-19. Severe blunt trauma and the resulting surgical intervention similarly initiate a sequence of inflammatory events resulting in clinical instability. 18 In the case we have presented, an asymptomatic COVID-19 patient with a femur fracture was urgently treated with intramedullary fixation, as is the standard of care for the treatment of femoral shaft fractures. Intraoperatively, she developed pulmonary and fat emboli resulting in a systemic hyperinflammatory response and acute cardiopulmonary collapse. Our hypothesis is that the patient's diagnosis of COVID-19 amplified the initial inflammatory response to the low-energy traumatic insult ("first hit") in a way that was not clinically apparent preoperatively. In addition, the hypercoagulable state secondary to COVID-19 and the inflammatory load of intramedullary reaming, fat emboli, and pulmonary embolism resulted in a "second hit" that may have cumulatively pushed our patient past a "tipping point" and into respiratory failure (Fig. 4). We would not have expected this type of response during intramedullary fixation of a low-energy fracture in a COVID-negative patient without any preoperative respiratory symptoms or illness.

Pathogenesis of COVID-19: Hyperinflammation and Thromboembolic Disease

The clinical presentation of COVID-19 resembles viral pneumonia, with severe cases rapidly progressing to ARDS. 19 CRS is implicated in the pathogenesis of these severe cases of COVID-19. Although the full immunologic response elicited by COVID-19 is still not fully elucidated, current reports indicate elevations of a distinct set of proinflammatory cytokines, including interleukin (IL)-2, IL-6, and tumor necrosis factor (TNF)-α. 3,20 Treatment protocols for severe COVID-19 are aimed at attenuating this life-threatening inflammation, such as with the IL-6-inhibiting agent tocilizumab.
3,20 These treatments are currently reserved for the population of patients with a declining clinical status paired with worsening inflammatory markers. 3 The overwhelming inflammatory response in these patients is believed to cause diffuse alveolar damage and endothelial dysfunction. The dysfunctional endothelium thus becomes prothrombotic, which predisposes patients to microangiopathy and microthrombi. 5,20 The clinical implications of this are profound, as the presence of vasculitis and a prothrombotic state can make patients vulnerable to pulmonary embolism, which can exacerbate hypoxemia caused by ARDS. 5 This is a crucial point for consideration in asymptomatic COVID+ patients presenting with orthopaedic trauma, as we hypothesize that subclinical levels of systemic inflammation from COVID-19 may predispose to adverse outcomes. This theory is supported by a recent report out of Wuhan demonstrating alarmingly high mortality and ICU admission rates after elective surgery on asymptomatic patients in the incubation period of COVID-19. 17

In addition to these microvascular aberrations, significant coagulation abnormalities appear to be associated with the CRS and implicated in disease progression. 1,6,19,21,22 These hemostatic derangements include increased clot strength, increased fibrinogen and fibrin degradation product levels, elevated D-dimer levels, decreased prothrombin time and international normalized ratio times, as well as patterns of disseminated intravascular coagulation. [21-23] Such changes also predispose these patients to thrombotic events such as venous thromboembolism, much like the previous zoonotic virus outbreaks [SARS and Middle East respiratory syndrome (MERS-CoV)]. 1,24 This is convincingly demonstrated by a mounting number of reported cases of young patients with large-vessel strokes as a presenting feature of COVID-19. 25 The most commonly observed hemostatic abnormality in these patients is an elevated D-dimer level (>1 mg/mL), which has specifically been associated with an increased risk of ICU admission, mechanical intubation requirement, and death. 1,26 For these reasons, the use of empiric anticoagulation at therapeutic doses in patients with highly elevated D-dimer levels is being implemented by some intensivists and is currently supported by some experts in the American College of Cardiology. 1 In fact, early reports have indicated decreased mortality in severe COVID-19 patients with coagulopathy who were treated with anticoagulation. 22 It is possible that the coagulation abnormalities associated with this disease may have contributed to the development of the acute DVTs in our patient, and that the intraoperative initiation of a heparin drip may have attenuated the effects of pulmonary emboli. These coagulative effects of COVID-19 and the proposed benefits of anticoagulative therapies are important to consider when treating orthopaedic trauma patients.

In COVID+ patients presenting with orthopaedic trauma, the hyperinflammatory and hypercoagulable state caused by the virus may also have significant implications for blunt injury pathophysiology. After severe blunt trauma, damaged tissue induces a local and systemic inflammatory response mediated by the release of the cytokines TNF-α, IL-1β, and, most importantly, IL-6.
The severity of this inflammatory response and the subsequent clinical course are determined by the following 3 factors: (1) the degree of the initial injury ("first hit"), (2) the individual's mounted biological response, and (3) the type of treatment ("second hit"). 10 These 3 factors contribute toward a mounting inflammatory cascade that increases until a patient's biologic reserve is overwhelmed and a "tipping point" is reached (Fig. 4). 10 The "tipping point" refers to a state of clinical instability associated with microvascular injury, interstitial edema, hemodynamic lability, and end-organ failure. 10 The "first hit" can be reliably quantified in traumatized patients by measuring IL-6 levels, which have been shown to be correlated with the incidence of multiple organ failure and with patient survival. 27 Emerging studies on COVID-19 have similarly observed correlations between IL-6 levels and disease severity, which suggests a potential mutual inflammatory pathway with that of trauma patients (although stemming from a distinct inciting event). 24,28 Therefore, COVID-19 may decrease a trauma patient's biologic reserve before a physiologic "tipping point" is reached (Fig. 4). In other words, COVID-19 may unfavorably amplify the "first hit" by contributing a significant biologic response before the injury, whether the patient is symptomatic or not. This may manifest clinically as decreased cardiopulmonary capacity in these patients, albeit to variable extents based on the severity of their disease. This hypothesis is supported by a recent case series demonstrating a 40% mortality rate in symptomatic COVID+ fracture patients. 16

Among the 3 aforementioned factors contributing toward the "tipping point," the treatment is the sole modifiable factor. In particular, long bone fractures treated with intramedullary fixation are at risk for fat embolization in addition to the inflammatory response from this "second hit." 18,29

FIGURE 3. Products removed from percutaneous pulmonary suction embolectomy. The yellow products are presumed intramedullary fat, and the red products are the clot burden. Image quality is suboptimal because the camera was required to be in a plastic bag due to COVID+ status.

These events have been demonstrated to result in pulmonary insult, changes in markers of coagulation, and, at times, cardiovascular strain. The coagulative effects are due to simultaneous activation of both the fibrinolytic and coagulation pathways, 29,30 and the inflammatory effects are mediated by IL-6. 30 As for the cardiopulmonary effects of intramedullary fixation, intraoperative measurements on humans using transesophageal echocardiogram and cardiopulmonary monitoring have demonstrated consistent pulmonary arterial pressure elevations and, in severe cases, significant hypoxemia and right heart strain during guide-wire insertion and canal reaming. 29,31 These findings are mostly attributable to fat emboli, which occur in approximately 90% of trauma patients but are only clinically apparent in 1%-5%. 31 In addition, ARDS is a potential consequence of the inflammatory response from intramedullary fixation of femur fractures. 11,32

Current Outcomes Data

In the aforementioned study on elective surgery outcomes of asymptomatic COVID+ patients, the authors reported a 44.1% (15 of 34) postoperative ICU admission rate, which is significantly higher than the reported 26.1% in hospitalized nonsurgical COVID-19 patients.
17 Furthermore, the reported 20.5% (7 of 34) mortality rate was significantly higher than the overall case-fatality rate of 2.3% in nonsurgical COVID-19 patients. 33 Patients in that study also developed COVID symptoms on average 2.6 days postoperatively, and the median time from symptom onset to the development of dyspnea was 3.5 days. In comparison, a previous study on nonsurgical COVID-19 patients reported that the median time to symptom onset was 8.0 days. 19 The difference is alarming. The most common postsurgical complication in the ICU-admitted patients in that study was ARDS, and over half of these patients received subsequent immunosuppressive medications to attenuate the disease's inflammatory response. The authors of this study reached a conclusion similar to ours: that surgical stress occurring during the incubation period of SARS-CoV-2 infection exacerbates disease progression and severity. 17

Another recent study out of China by Mi et al 16 reported on 10 COVID+ fracture patients, 3 of whom did not have any signs or symptoms of the disease. Three patients underwent surgical fixation of their fractures, and the rest were treated nonoperatively because of their declining clinical status. They reported that 4 of these 10 patients died at 8 and 14 days after admission. Abnormal D-dimer levels were present in all patients, and elevated CRP in 90% of patients, but prothrombin times were normal in 70% of patients. 16 The authors of this study also concluded that the characteristics and prognosis of COVID-19 patients with fractures tend to be more severe than those reported for COVID+ patients without fractures. 16

Most recently, Catellani et al 34 published their outcomes in Italy on the treatment of proximal femoral fragility fractures in patients with COVID-19. They reported on 16 symptomatic COVID+ patients with confirmed viral pneumonia on chest computed tomography. Three of these patients died of respiratory failure before surgery; the remaining 13 patients underwent surgical stabilization of their fractures with either an intramedullary nail or hemiarthroplasty. Nine of these patients were stable postoperatively, whereas 4 patients died of respiratory failure on the first, third, and seventh days postoperatively. 34 The authors of this study concluded that surgical stabilization of proximal femoral fractures resulted in stabilization of respiratory parameters, but did mention that elderly patients with comorbidities and symptomatic COVID-19 are not eligible for orthopaedic surgery. 34

SUMMARY AND FUTURE DIRECTIONS

The current environment in the health care system is unprecedented. There are little to no data to guide us in our decision making when treating patients with COVID-19. The disease manifests in ways we would never have been able to predict. The level of cytokine response, hypercoagulability, and pulmonary dysfunction associated with the COVID-19 virus may predispose to a catastrophic "second hit" after even low-energy trauma. This is in line with previous hypotheses that postsurgical complications are more accurately predicted by assessing objective data covering several physiological systems (coagulation, acid-base changes, soft-tissue damage, etc.) compared with using data from a single physiologic system (eg, acidemia). 35 Regarding our patient, we hypothesize that COVID-19 may have lowered her physiologic reserves to withstand the relatively low intraoperative embolus burden.
Interestingly, not only did the clot burden fail to correlate with the patient's physiologic status after nailing, but she also did not show any significant improvement after embolectomy of the thrombus and fat, indicating another disease process such as ARDS.

CONSIDERATIONS FOR PATIENTS WITH COVID-19 SUSTAINING ORTHOPAEDIC TRAUMA

The following precautions may be appropriate when dealing with the unprecedented challenges associated with COVID-19 patients presenting with orthopaedic trauma injuries.

1. Test for COVID-19 in all patients with unknown disease status upon admission.
2. Consider obtaining baseline inflammatory markers including IL-6 (if available), D-dimer, and CRP, which may aid in surgical decision making and prognosis.
3. Consider obtaining lower extremity duplex ultrasound in all patients testing positive for COVID-19 and with high-risk fractures.
4. For patients with confirmed proximal DVT, consider either aggressive intraoperative anticoagulation or placement of an inferior vena cava filter preoperatively.
5. Consider alternative orthopaedic trauma management strategies (eg, damage control orthopaedics and nonoperative treatment) in patients with severe cases of symptomatic COVID-19, even in low-energy trauma.
6. Consider surgical treatments that avoid canal instrumentation.
7. Consider avoiding excessive reaming if intramedullary fixation is performed.

CONCLUSION

Mounting evidence suggests that the pathogenesis of COVID-19 involves a hyperinflammatory response predisposing patients to thromboembolic disease and acute respiratory distress. In the setting of severe blunt trauma, damaged tissues induce a local and systemic inflammatory response through similar pathways to COVID-19. As such, patients with COVID-19 sustaining orthopaedic trauma injuries may have an amplified response to the traumatic insult because of their baseline hyperinflammatory and hypercoagulable states. Careful consideration and risk/benefit analysis, including preoperative evaluation of systemic inflammation and respiratory status, is paramount in patients with COVID-19 presenting with orthopaedic trauma injuries.
The Ratio of CO to Total Gas Mass in High Redshift Galaxies

Walter et al. (2012) have recently identified the J=6-5, 5-4, and 2-1 CO rotational emission lines, and the [C_{II}] fine-structure emission line, from the star-forming interstellar medium in the high-redshift submillimeter source HDF 850.1, at z = 5.183. We employ large velocity gradient (LVG) modeling to analyze the spectra of this source assuming the [C_{II}] and CO emissions originate from (i) separate unvirialized regions, (ii) separate virialized regions, (iii) uniformly mixed unvirialized regions, and (iv) uniformly mixed virialized regions. We present the best fit set of parameters, including for each case the ratio $\alpha$ between the total hydrogen/helium gas mass and the CO(1-0) line luminosity. We also present computations of the ratio of H_{2} mass to [C_{II}] line-luminosity for optically thin conditions, for a range of gas temperatures and densities, for direct conversion of [C_{II}] line-luminosities to "dark-H_{2}" masses. For HDF 850.1 we find that a model in which the CO and C^{+} are uniformly mixed in gas that is shielded from UV radiation requires a cosmic-ray or X-ray ionization rate of $\zeta \approx$ 10^{-13} s^{-1}, plausibly consistent with the large star-formation rate ($\sim$ 10^{3} M$_{\odot}$ yr^{-1}) observed in this source. Enforcing the cosmological constraint posed by the abundance of dark matter halos in the standard $\Lambda$CDM cosmology, and taking into account other possible contributions to the total gas mass, we find that three of these four models are less likely at the 2$\sigma$ level. We conclude that modeling HDF 850.1's ISM as a collection of unvirialized molecular clouds with distinct CO and C^{+} layers, for which $\alpha$ = 0.6 M$_{\odot}$ (K km s^{-1} pc^{2})^{-1} for the CO to H_{2} mass-to-luminosity ratio (similar to the standard ULIRG value), is most consistent with the $\Lambda$CDM cosmology.

INTRODUCTION

Observations of high-redshift CO spectral line emissions have greatly increased our knowledge of galaxy assembly in the early Universe. At redshifts z∼2, this includes the discovery of turbulent star-forming disks with cold-gas mass fractions and star-formation rates significantly larger than in present day galaxies (Daddi et al. 2010; Genzel et al. 2012; Magnelli et al. 2012; Tacconi et al. 2010, 2012). The high star-formation rates are correlated with large gas masses and luminous CO emission lines (Kennicutt & Evans 2012) observable to very high redshifts. A prominent example is the luminous submillimeter and Hubble-Deep-Field source HDF 850.1, which is at a redshift of z = 5.183 as determined by recent detections of CO(6-5), CO(5-4), and CO(2-1) rotational line emissions, and also [CII] fine-structure emission, in this source (Walter et al. 2012). The high redshift of HDF 850.1 offers the opportunity of setting cosmological constraints on the conversion factor from CO line luminosities to gas masses, via the implied dark matter masses and the expected cosmic volume density of halos of a given mass. Such analysis is the subject of our paper.

CO emitting molecular clouds, which provide the raw material for star formation, are usually assumed to have undergone complete conversion from atomic to molecular hydrogen. However, since H2 has strongly forbidden rotational transitions and requires high temperatures (∼500 K) to excite its rotational lines, it is a poor tracer of cold ($\lesssim$ 100 K) molecular gas.
Determining H2 gas masses in the interstellar medium (ISM) of galaxies has therefore relied on tracer molecules. In particular, $^{12}$CO is the most commonly employed tracer of ISM clouds; aside from being the most abundant molecule after H2, CO has a weak dipole moment ($\mu_e$ = 0.11 Debye) and its rotational levels are thus excited and thermalized by collisions with H2 at relatively low molecular hydrogen densities (Solomon & Vanden Bout 2005). The molecular hydrogen gas mass is often obtained from the CO luminosity by adopting a mass-to-luminosity conversion factor $\alpha = M_{\rm H_2}/L'_{\rm CO(1-0)}$ between the H2 mass and the J=1-0 115 GHz CO rotational transition (Bolatto et al. 2013). The value of α has been empirically calibrated for the Milky Way Galaxy by three independent techniques: (i) correlation of optical extinction with CO column densities in interstellar dark clouds (Dickman 1978); (ii) correlation of gamma-ray flux with the CO line flux in the Galactic molecular ring (Bloemen et al. 1986; Strong et al. 1988); and (iii) observed relations between the virial mass and CO line luminosity for Galactic GMCs (Solomon et al. 1987). These methods have all arrived at the conclusion that the conversion factor in our Galaxy is fairly constant. The standard Galactic value is $\alpha$ = 4.6 M$_{\odot}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$. Subsequent studies of CO emission from unvirialized regions in Ultra-Luminous Infrared Galaxies (ULIRGs) found a significantly smaller ratio; for such systems, $\alpha$ = 0.8 M$_{\odot}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ (Downes & Solomon 1998). These values have been adopted by many (Sanders et al. 1988; Tinney et al. 1990; Wang et al. 1991; Walter et al. 2012) to convert CO J=1-0 line observations to total molecular gas masses, but without consideration of the dependence of α on the average molecular gas conditions found in the sources being considered. Since all current observational studies of α leave its range and dependence on the average density, temperature, and kinetic state of the molecular gas still largely unexplored, its applicability to other systems in the local or distant Universe is less certain (Papadopoulos et al. 2012).

In this paper, we estimate the CO emitting gas masses in HDF 850.1 using the large-velocity-gradient (LVG) formalism to fit the observed emission line spectral energy distribution (SED) for a variety of model configurations, and for a comparison to the Galactic and ULIRG conversions. In §2 we outline the details of the LVG approach, including an overview of the escape probability method and a derivation of the gas mass from the line intensity of the modeled source. In §3, we present the best fit set of parameters that reproduce HDF 850.1's detected lines and calculate the corresponding molecular gas mass, assuming the CO and [CII] emission lines originate from (i) separate virialized regions, (ii) separate unvirialized regions, (iii) uniformly mixed virialized regions, and (iv) uniformly mixed unvirialized regions. The inferred gas masses enable us to set lower limits on the dark-matter halo mass for HDF 850.1. In §4 we compare the estimated halo masses to the number of such objects expected at high redshift in ΛCDM cosmology, and show that models with lower values of α are favored. We conclude with a discussion of our findings and their implications in §5.
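As a quick illustration of how strongly the choice of conversion factor propagates into the inferred gas mass, the lines below apply the two calibrations just quoted to a hypothetical CO(1-0) line luminosity of $10^{10}$ K km s$^{-1}$ pc$^{2}$ (an invented round number, not a measurement):

% Gas mass M = alpha * L'_CO(1-0) for an assumed
% L' = 10^{10} K km s^{-1} pc^2:
\[
  M_{\rm Gal}   \;=\; 4.6 \times 10^{10}\,M_\odot , \qquad
  M_{\rm ULIRG} \;=\; 0.8 \times 10^{10}\,M_\odot ,
\]

a factor of nearly 6 difference arising from the conversion factor alone.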
LARGE VELOCITY GRADIENT MODEL

We start by describing our procedure for quantitatively analyzing the [CII] and CO emission lines detected at the position of HDF 850.1 using the large velocity gradient (LVG) approximation. We consider a multi-level system with population densities of the i-th level given by $n_i$. The equations of statistical equilibrium can then be written as

$$\sum_{j \neq i} n_j R_{ji} \;=\; n_i \sum_{j \neq i} R_{ij}, \qquad i = 1, \dots, l,$$

where l is the total number of levels included; since the set of l statistical equations is not independent, one equation may be replaced by the conservation equation

$$\sum_{i=1}^{l} n_i = n_{\rm tot},$$

where $n_{\rm tot}$ is the number density of the given species in all levels. In our application, $n_{\rm tot} = n_{\rm CO}$. Following the notation of Poelman & Spaans (2005), $R_{ij}$ is given in terms of the Einstein coefficients, $A_{ij}$ and $B_{ij}$, and the collisional excitation ($i < j$) and de-excitation ($i > j$) rates $C_{ij}$:

$$R_{ij} = A_{ij} + B_{ij}\bar{J}_{ij} + C_{ij} \quad (i > j), \qquad R_{ij} = B_{ij}\bar{J}_{ij} + C_{ij} \quad (i < j),$$

where $\bar{J}_{ij}$ is the mean radiation intensity corresponding to the transition from level i to j, averaged over the local line profile function $\phi(\nu_{ij})$. The total collisional rates $C_{ij}$ depend on the individual temperature-dependent rate coefficients and collision partners, usually H2 and H for CO rotational excitation.

The difficulty in solving this problem is that the mean intensity at any location in the source is a function of the emission and varying excitation state of the gas all over the rest of the source, and is thus a nonlocal quantity. To obtain a general solution of the coupled sets of equations describing radiative transfer and statistical equilibrium, we adopt the approach developed by Sobolev (1960) and extended by Castor (1970) and Lucy (1971), and assume the existence of a large velocity gradient in dense clouds. This assumption is justified given the interstellar molecular line widths, which range from a few up to a few tens of kilometers per second, far in excess of plausible thermal velocities in the clouds (Goldreich & Kwan 1974). They suggest that these observed velocity differences arise from large-scale, systematic velocity gradients across the cloud, a hypothesis that lies in accord with the constraints provided by observation and theory.

In the limit that the thermal velocity in the cloud is much smaller than the velocity gradient across the radius of the cloud, the value of $\bar{J}_{ij}$ at any point in the cloud, when integrated over the line profile, depends only upon the local value of the source function and upon the probability that a photon emitted at that point will escape from the cloud without further interaction. Thus $\bar{J}_{ij}$ becomes a purely local quantity, given by

$$\bar{J}_{ij} = (1 - \beta_{ij})\,S_{ij} + \beta_{ij}\,B_{ij}(\nu_{ij}, T_B),$$

where $S_{ij}$ is the line source function, assumed constant through the medium. In this expression $g_i$ and $g_j$ are the statistical weights of levels i and j respectively, $\beta_{ij}$ is the "photon escape probability", and $B_{ij}(\nu_{ij}, T_B)$ is the background radiation with temperature $T_B$. In our models we set $T_B$ to the CMB temperature of 16.9 K at z = 5.183. We ignore contributions from warm dust (da Cunha et al. 2013).

TABLE 1. Detection of four lines tracing the star-forming interstellar medium in HDF 850.1 (Walter et al. 2012).

For a spherical homogeneous collapsing cloud, the probability that a photon emitted in the transition from level i to level j escapes the cloud is given by

$$\beta_{ij} = \frac{1 - e^{-\tau_{ij}}}{\tau_{ij}},$$

where $\tau_{ij}$ is the optical depth in the line. The equations of statistical equilibrium are therefore reduced to this simplified local form and can be solved through an iterative process to give the fractional level populations $n_i/n_{\rm tot}$ (for a given choice of densities for the collision partners, usually H2 but also H, and kinetic temperature $T_{\rm kin}$).
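To make the iterative scheme concrete, the following Python sketch solves the statistical-equilibrium/escape-probability loop for a toy two-level species. All atomic data are placeholders for illustration (not CO values from the CDMS or the collision tables used in this work), and the CMB background term is neglected.

import numpy as np

# Toy two-level LVG iteration (levels 0 = lower, 1 = upper).
A10 = 1.0e-7            # Einstein A coefficient [s^-1] (placeholder)
g0, g1 = 1.0, 3.0       # statistical weights
T_kin = 40.0            # kinetic temperature [K]
T_line = 5.5            # line energy h*nu/k [K], roughly the CO(1-0) value
n_H2 = 1.0e3            # collision partner density [cm^-3]
k10 = 1.0e-10           # de-excitation rate coefficient [cm^3 s^-1]

C10 = k10 * n_H2                                  # collisional de-excitation [s^-1]
C01 = C10 * (g1 / g0) * np.exp(-T_line / T_kin)   # excitation, by detailed balance

tau0 = 10.0             # optical-depth scale set by the column density / (dv/dr)

def beta(tau):
    """Escape probability for a homogeneous collapsing sphere (Sobolev)."""
    return (1.0 - np.exp(-tau)) / tau if abs(tau) > 1e-8 else 1.0

f1 = 0.0                # upper-level population fraction, initial guess
for _ in range(200):
    f0 = 1.0 - f1
    tau = tau0 * (f0 * g1 / g0 - f1)   # line optical depth, stimulated term included
    # Statistical equilibrium with photon trapping: radiative decay is
    # suppressed to A10 * beta(tau); the background field is neglected here.
    f1 = C01 / (C01 + C10 + A10 * beta(tau))

print(f"upper-level fraction = {f1:.4f}, final tau = {tau:.2f}")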
Assuming the telescope beam contains a large number of these identical homogeneous collapsing clouds (e.g., Hailey-Dunsheath 2009), the corresponding emergent intensity of an emission line integrated along a line of sight is then simply

$$I_{ij} = \frac{h\nu_{ij}}{4\pi}\,x_i\,A_{ij}\,\beta_{ij}\,\chi N, \tag{9}$$

where $x_i = n_i/n_{\rm tot}$ and $\chi$ is the abundance ratio, $\chi \equiv n_{\rm tot}/n$, with $n$ the total hydrogen gas volume density (cm$^{-3}$) and $N$ the hydrogen column density (cm$^{-2}$). Given a series of observed lines with frequency $\nu_{ij}$, one can identify a set of characterizing parameters that best reproduces the observed line ratios and intensity magnitudes; among these parameters are the cloud's kinetic temperature ($T_{\rm kin}$), velocity gradient ($dv/dr$), gas density $n$ and collision partner (H and H$_2$) gas fractions, abundance ratio ($\chi$), and column density ($N$). For a spherical geometry, this column density can then be further related to the molecular gas mass of the cloud in the following way:

$$M_{\rm H_2} = \mu\,m_{\rm H_2}\,N\,\pi R^2, \tag{10}$$

where the factor $\mu = 1.36$ takes into account the helium contribution to the molecular weight and $R = D_A\theta/2$ is the effective radius of the cloud, with $D_A$ being the angular diameter distance to the source and $\theta$ the beam size of the line observations. $N$ is defined in terms of the column density obtained from the LVG calculation, $N' = N(1+z)^4$, where the $(1+z)^4$ multiplicative factor reflects the decrease in surface brightness of a source at redshift $z$ in an expanding universe. In the case where the emitting molecular clouds are gravitationally bound, applying the virial theorem to a homogeneous spherical body yields the following constraint (Goldsmith 2001):

$$\frac{dv}{dr} = a\,n^{1/2}\ {\rm km\,s^{-1}\,pc^{-1}}, \tag{11}$$

where $a = 7.77\times10^{-3}$ if $n = n_{\rm H_2}$ and $a = 5.50\times10^{-3}$ if $n = n_{\rm H}$ (with $n$ in cm$^{-3}$). The velocity gradient is inversely proportional to the dynamical time scale. In models where the clouds are assumed to be virialized, $dv/dr$ and $n$ are no longer independent input parameters of the model, but rather vary according to equation (11). For the optically thick $^{12}$CO $J$=1-0 transition line ($\beta \approx 1/\tau$), carrying out the LVG calculations with this additional virialization condition leads to a simple relation between the gas mass and the CO(1-0) line luminosity,

$$\alpha = \frac{M_{\rm H_2}}{L'_{\rm CO(1-0)}} \propto \frac{\sqrt{n_{\rm H_2}}}{T_{\rm exc}}, \tag{12}$$

where the excitation temperature $T_{\rm exc} \approx T_{\rm kin}$ when the emission line is thermalized. (In this expression we assume complete conversion to H$_2$, so that the gas mass is the H$_2$ mass.) Empirically, the Galactic value of this mass-to-luminosity ratio for virialized objects bound by gravity is $\alpha = 4.6\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$.

[Figure 1. The gas mass to [CII] luminosity ratio for a C$^+$ to H$_2$ abundance ratio of $\chi_{C^+} = 10^{-4}$ as a function of the kinetic temperature $T_{\rm kin}$ at molecular hydrogen number densities of $n_{\rm H_2} = 10$, $10^2$, $10^3$, and $10^4$ cm$^{-3}$. Calculations were made in the optically thin regime, assuming the fine-structure transition J=3/2→1/2 is due solely to spontaneous emission processes and collisions with ortho- and para-H$_2$.]

ANALYSIS OF HDF 850.1

The [CII] fine-structure line at 1900.5 GHz (158 µm), one of the main cooling lines of the star-forming interstellar medium, has also been detected (Table 1). These lines have placed HDF 850.1 in a galaxy overdensity at $z = 5.183$, a redshift higher than those of most of the hundreds of submillimetre-bright galaxies identified thus far. Walter et al. (2012) used an LVG model to characterize the CO spectral energy distribution of HDF 850.1 and found that the observed CO line intensities could be fit with a molecular hydrogen density of $10^{3.2}$ cm$^{-3}$, a velocity gradient of 1.2 km s$^{-1}$ pc$^{-1}$, and a kinetic temperature of 45 K.
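As a numerical illustration of equation (10) with the $(1+z)^4$ correction, the short routine below is our sketch; the angular diameter distance and beam size are assumed placeholders of roughly the right magnitude for a $z \approx 5.2$ source.

```python
import numpy as np

M_SUN = 1.989e33      # g
M_H2_G = 3.3452e-24   # g, mass of one H2 molecule
PC = 3.086e18         # cm
MU = 1.36             # helium correction, eq. (10)

def gas_mass_msun(N_lvg_cm2, z, D_A_mpc, theta_arcsec):
    """M_H2 = mu * m_H2 * N * pi * R^2, with N = N'(1+z)^4 and R = D_A*theta/2."""
    N_true = N_lvg_cm2 * (1.0 + z) ** 4          # surface-brightness correction
    D_A_cm = D_A_mpc * 1e6 * PC
    R = D_A_cm * (theta_arcsec * np.pi / 648000.0) / 2.0
    return MU * M_H2_G * N_true * np.pi * R ** 2 / M_SUN

# LVG column density of the virialized model; distance and beam are assumed:
print(f"{gas_mass_msun(4.2e19, 5.183, 1300.0, 2.0):.2e} Msun")  # ~1.7e11
```

With these assumed geometric inputs the answer comes out at the same order as the $\sim 2\times10^{11}\,M_\odot$ found for the virialized model below.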
Then, assuming $\alpha = 0.8\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$ as for ULIRGs, they used the 1-0 line luminosity inferred from their LVG computation to infer that $M_{\rm H_2} = 3.5\times10^{10}\,M_\odot$. However, as Papadopoulos et al. (2012) argue, adopting a uniform value of $\alpha$ for ULIRGs neglects its dependence on the density, temperature, and kinematic state of the gas; this may limit the applicability of computed conversion factors to other systems in the local or distant Universe. Here, we broaden the LVG analysis carried out in Walter et al. (2012) and use the LVG-modeled column density to estimate the total gas mass of the source. We present several alternative models, each subject to a slightly different constraint. In particular, we first consider the case where the CO and [CII] lines originate from different regions of the molecular cloud. This picture is consistent with the standard structure of PDRs, in which there is a layer of almost totally ionized carbon at the outer edge, intermediate regions where the carbon is atomic, and internal regions where the carbon is locked into CO (Sternberg & Dalgarno 1995; Tielens & Hollenbach 1985; Wolfire et al. 2010). In this picture, then, the hydrogen is fully molecular in the CO-emitting regions. We then consider models in which the CO molecules and C$^+$ ions are uniformly mixed, such that the line emissions originate in gas at the same temperature and density. These models resemble conditions found in UV-opaque, cosmic-ray dominated dark cores of interstellar molecular clouds, where the chemistry is driven entirely by cosmic-ray ionization. For HDF 850.1, the ionization rates could be significantly higher than in the Milky Way, leading to enhanced C$^+$ in the UV-shielded regions. In both instances, we perform our LVG computations assuming (a) virialized clouds, for which the virialization condition (Equation [11]) has been imposed, and (b) gravitationally-unbound clouds. To carry out these computations, we use the Mark & Sternberg LVG radiative transfer code described in Davies et al. (2012). Energy levels, line frequencies and Einstein A coefficients are taken from the Cologne Database for Molecular Spectroscopy (CDMS). The excitation and de-excitation rates of the CO rotational levels induced by collisions with H$_2$ are taken from Yang et al. (2010), while the C$^+$ collisional rate coefficients come from Flower & Launay (1977) and Launay & Roueff (1977).

Separate CO, C$^+$ Virialized Regions

We first consider a model in which the CO and [CII] emission lines detected at the position of HDF 850.1 originate in separate regions of the molecular gas cloud, regions which are not necessarily at the same temperature and number density. For self-gravitating clouds in virial equilibrium, the velocity gradient is no longer an independent input parameter of the LVG model, but varies with $n_{\rm H_2}$ according to equation (11). To find the unique solution that yields the two observed line ratios, $I_{\rm CO(6-5)}/I_{\rm CO(2-1)}$ and $I_{\rm CO(6-5)}/I_{\rm CO(5-4)}$, we assume a canonical value of $\chi_{\rm CO} = 10^{-4}$ for the relative CO to H$_2$ abundance and vary the remaining two parameters, temperature and molecular hydrogen density, over a large volume of the parameter space. We find, under this virialization constraint, that the observed CO lines are best fit with a kinetic temperature of 70 K and a molecular hydrogen number density of $10^{2.6}$ cm$^{-3}$ (with a corresponding velocity gradient of $\approx 0.16$ km s$^{-1}$ pc$^{-1}$).
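The parameter search just described can be paraphrased in a few lines of code. The sketch below is illustrative only: the "model" replaces a real LVG solver with a toy LTE-plus-critical-density intensity, and the observed ratios, errors, and critical densities are assumed round numbers; but the chi-square scan over $(T_{\rm kin}, n_{\rm H_2})$ mirrors the fitting procedure used here.

```python
import numpy as np

B_ROT = 2.77                              # K, CO rotational constant
N_CRIT = {2: 1.1e4, 5: 1.7e5, 6: 2.9e5}   # cm^-3, rough critical densities

def toy_intensity(J_up, T_kin, n_H2):
    """NOT an LVG solver: Boltzmann population times a nu^4 scaling times a
    two-level suppression factor n/(n + n_crit)."""
    pop = (2 * J_up + 1) * np.exp(-B_ROT * J_up * (J_up + 1) / T_kin)
    return pop * J_up ** 4 * n_H2 / (n_H2 + N_CRIT[J_up])

obs = {"65_21": 2.5, "65_54": 0.9}        # illustrative observed line ratios
err = {"65_21": 0.3, "65_54": 0.1}

best, best_chi2 = None, np.inf
for T in np.arange(20.0, 205.0, 5.0):
    for log_n in np.arange(2.0, 5.05, 0.1):
        n = 10.0 ** log_n
        model = {"65_21": toy_intensity(6, T, n) / toy_intensity(2, T, n),
                 "65_54": toy_intensity(6, T, n) / toy_intensity(5, T, n)}
        chi2 = sum(((model[k] - obs[k]) / err[k]) ** 2 for k in obs)
        if chi2 < best_chi2:
            best, best_chi2 = (T, n), chi2

print(best, best_chi2)
```

In the virialized case, equation (11) removes one axis of this grid, which is why a two-ratio fit suffices to pin down both remaining parameters.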
The column density that yields the correct line intensity magnitudes is $4.2\times10^{19}$ cm$^{-2}$, corresponding to a molecular hydrogen gas mass of $M_{\rm H_2} \approx 2.13\times10^{11}\,M_\odot$. The H$_2$ mass to CO luminosity conversion factor obtained in this model is $\alpha = 5.1\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$, a value similar to the Galactic conversion factor observed for virialized molecular clouds in the Milky Way, suggesting that HDF 850.1 may have some properties in common with our Galaxy. Reducing the relative CO to H$_2$ abundance by a factor of two, to $\chi_{\rm CO} = 5\times10^{-5}$, results in a best-fit solution with a molecular hydrogen gas mass of $M_{\rm H_2} \approx 2.68\times10^{11}\,M_\odot$, nearly 25% larger than the value obtained assuming $\chi_{\rm CO} = 10^{-4}$. Ranges on the fit parameters consistent with the observational uncertainties are listed in Table 2.

Separate CO, C$^+$ Unvirialized Regions

We then consider a model in which the CO and [CII] emission lines are assumed to originate from separate regions of gravitationally-unbound molecular clouds. Since unvirialized clouds generally demonstrate a higher degree of turbulence relative to their virialized counterparts, we expect the velocity gradient in this model to be greater than the velocity gradient obtained for the virialized model, $(dv/dr)_{\rm virialized} \approx 0.16$ km s$^{-1}$ pc$^{-1}$. We therefore fix the velocity gradient to be ten times the virialized value, $(dv/dr)_{\rm unvirialized} = 1.6$ km s$^{-1}$ pc$^{-1}$, and, assuming a canonical value of $\chi_{\rm CO} = 10^{-4}$, find the solution that yields the two observed line ratios by varying $T$ and $n_{\rm H_2}$. We find, under these assumptions, that the CO SED is best fit with a molecular hydrogen density of $10^3$ cm$^{-3}$ and a kinetic temperature of 100 K. For this set of parameters, the beam-averaged H$_2$ column density is $N_{\rm H_2} \approx 1.0\times10^{19}$ cm$^{-2}$, giving an associated molecular gas mass of $M_{\rm H_2} \approx 5.16\times10^{10}\,M_\odot$. This estimate of the gas mass is nearly 50% larger than that obtained by Walter et al. (2012) by applying the H$_2$ mass-to-CO luminosity relation with the typically adopted conversion factor for ULIRGs, $\alpha = 0.8\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$. Given our inferred H$_2$ gas mass and the predicted CO(1-0) line luminosity from our LVG fit, we find that $\alpha = 1.2\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$ in this model. The increase in molecular hydrogen density and the reduction in inferred mass (relative to the values obtained in the previous model, where the virialization condition was imposed) arise from our assumption that the velocity gradient in this model is greater than the velocity gradient obtained in the virialized case. For a fixed $\chi$, the optical depth drops with increasing $dv/dr$ (Equation [7]); since $\beta$, the probability of an emitted photon escaping, correspondingly increases, the radiation is less "trapped" and a higher density $n_{\rm H_2}$ is required to produce the observed CO excitation lines. Furthermore, since $I_{ij} \propto \beta_{ij} M$, a larger $\beta$ implies that less mass is required to reproduce an observed set of line intensities. Therefore, assuming $(dv/dr)_{\rm unvirialized} = 10\,(dv/dr)_{\rm virialized}$ causes the optical depth, and consequently the inferred mass, to drop by a factor of nearly 4 in this model.

Optically thin [CII]

In the two models above, where the CO and [CII] lines are assumed to be emitted from separate regions of the molecular clouds, the single detected ionized carbon line is insufficient to constrain the parameters of the LVG-modeled [CII] region.
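The escape-probability argument above is easy to verify numerically; with illustrative optical depths, a tenfold increase in $dv/dr$ (hence a tenfold drop in $\tau$, equation [7]) lowers the mass needed to match a fixed line intensity by a factor of a few.

```python
import numpy as np

beta = lambda tau: (1.0 - np.exp(-tau)) / tau  # eq. (6)

tau_vir = 8.0                # hypothetical optical depth, virialized case
tau_unvir = tau_vir / 10.0   # dv/dr increased tenfold

b_vir, b_unvir = beta(tau_vir), beta(tau_unvir)
# Since I_ij ~ beta_ij * M (eq. 9), matching the same observed intensity
# requires M_unvir / M_vir ~ b_vir / b_unvir:
print("beta grows by", b_unvir / b_vir)        # factor of a few
print("mass drops by", b_vir / b_unvir)        # i.e. the inferred mass falls
```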
We thus consider the optically thin regime of the [CII] line ($\beta \simeq 1$), such that

$$\frac{M_{\rm H_2}}{L_{\rm [CII]}} = \frac{\mu\,m_{\rm H_2}}{h\nu_{ul}\,A_{ul}\,\chi_{C^+}\,x_{3/2}}, \tag{13}$$

where $\chi_{C^+}$ is the C$^+$ to H$_2$ abundance ratio (fixed at a value of $10^{-4}$ in these calculations) and $x_i$ represents the fraction of ionized carbon ions in the $i$-th energy level. Assuming that the fine-structure transition J=3/2→1/2 is due solely to spontaneous emission processes and collisions with ortho- and para-H$_2$, the equations of statistical equilibrium reduce to a simplified form and can be solved to obtain $x_{3/2}$ as a function of temperature and H$_2$ number density. In Fig. 1 we have plotted the resulting mass-to-luminosity ratio, $M_{\rm H_2}/L_{\rm [CII]}$, as a function of the kinetic temperature for several different values of $n_{\rm H_2}$. Given the high [CII]/far-infrared luminosity ratio of $L_{\rm [CII]}/L_{\rm FIR} = (1.7\pm0.5)\times10^{-3}$ in HDF 850.1 (Walter et al. 2012), it is reasonable to assume the ionized carbon is emitting efficiently and to thus consider the high-temperature, large number density limit ($T_{\rm kin} \simeq 500$ K, $n_{\rm H_2} \simeq 10^4$ cm$^{-3}$). In this limit, the mass-to-luminosity ratio is $\approx 1.035\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$ and the corresponding molecular gas mass of the C$^+$ region, using the detected luminosity of the ionized carbon line, $L_{\rm [CII]} = 1.1\times10^{10}\,L_\odot$, is found to be $M_{\rm H_2} \approx 5.2\times10^{10}\,M_\odot$.

Uniformly Mixed CO, C$^+$ Virialized Region

We next consider models in which the CO molecules and C$^+$ ions are mixed uniformly, such that the corresponding line emissions originate in gas at the same temperature, density, and velocity gradient. For these conditions, the chemistry is driven by cosmic-ray ionization, and the density fractions $n_i/n$ for CO and C$^+$ depend on a single parameter, the ratio of the cloud density to the cosmic-ray ionization rate $\zeta$ (Boger & Sternberg 2005). For Galactic conditions with $\zeta \sim 10^{-16}$ s$^{-1}$, the C$^+$ to CO ratio is generally very small. However, in objects such as HDF 850.1, where the ionization rate may be much larger due to high star formation rates, this ratio may be enhanced significantly.

[Figure 2. Dependence of the density ratios $n_{C^+}/n_{\rm CO}$ (solid line) and $n_{\rm H_2}/n_{\rm H}$ (dashed line) on the H$_2$ ionization rate $\zeta$ at fixed solar metallicity $Z' = 1$, for our best-fit LVG parameters $T_{\rm kin} = 160$ K and $n_{\rm H_2} = 10^3$ cm$^{-3}$. As $\zeta$ increases, the abundance of C$^+$ relative to CO in the cosmic-ray dominated dark cores of interstellar clouds grows, while that of H$_2$ to atomic hydrogen decreases. The desired value $n_{C^+}/n_{\rm CO} \approx 13$ is obtained for $\zeta \approx 2.5\times10^{-14}$ s$^{-1}$, at which point $n_{\rm H_2} \approx 0.4\,n_{\rm H}$.]

The density fractions are computed for C$^+$-C-CO interconversion in a purely ionization-driven chemical medium (Boger & Sternberg 2005). For solar abundances of the heavy elements ($Z' = 1$), a fairly reasonable assumption given the high star formation rate observed in HDF 850.1, the cosmic-ray ionization rate required to achieve this high C$^+$ to CO ratio in a cloud with temperature $T = 160$ K and density $n_{\rm H_2} = 10^3$ cm$^{-3}$ is of order $\zeta \simeq 2.5\times10^{-14}$ s$^{-1}$ (Figure 2). This is significantly enhanced compared to the Milky Way value, $\zeta \sim 10^{-16}$ s$^{-1}$, and is plausibly consistent with the fact that HDF 850.1 has a star-formation rate of 850 solar masses per year, a value larger than the measured Galactic SFR by a factor of $10^3$ (Walter et al. 2012). Furthermore, for a cosmic-ray ionization rate of this magnitude, the hydrogen is primarily atomic at the implied LVG gas density. We find that the ratio of molecular to atomic hydrogen in the interstellar clouds is $n_{\rm H_2}/n_{\rm H} \approx 0.4$ (Figure 2). We thus replace H$_2$ with H as the dominant collision partner.
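The optically thin [CII] estimate above amounts to a two-level computation; the sketch below (ours, with an assumed round-number effective collisional rate coefficient, and ignoring the radiation field) solves for the upper-level fraction $x_{3/2}$ that enters equation (13).

```python
import numpy as np

T_STAR = 91.2   # K, energy of the 2P_3/2 level above the ground state
A_UL = 2.3e-6   # s^-1, [CII] 158 um Einstein A coefficient
G_U, G_L = 4.0, 2.0

def x_upper(T_kin, n_H2, C_ul=5e-10):
    """Fraction of C+ in J=3/2: collisions with H2 versus spontaneous decay.
    C_ul is an assumed effective de-excitation rate coefficient, cm^3 s^-1."""
    C_lu = C_ul * (G_U / G_L) * np.exp(-T_STAR / T_kin)
    return n_H2 * C_lu / (n_H2 * (C_lu + C_ul) + A_UL)

# High-temperature, high-density limit quoted in the text:
print(x_upper(500.0, 1e4))  # ~0.53, approaching the LTE value of ~0.63
```

Once $x_{3/2}$ saturates in this limit, the mass-to-luminosity ratio of equation (13) flattens, which is why the curves in Figure 1 converge at high $T_{\rm kin}$ and $n_{\rm H_2}$.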
Given the uncertain CO-H rotational excitation rates (see Shepler et al. 2007), we assume that the rate coefficients are equal to the rates for collisions with ortho-H$_2$ (Flower & Pineau des Forêts 2010). We find that the observed CO and [CII] lines together are best fit with a temperature of 160 K, an atomic hydrogen number density of $n_{\rm H} = 10^3$ cm$^{-3}$ (with a corresponding velocity gradient of 0.17 km s$^{-1}$ pc$^{-1}$), and abundance ratios (relative to H) of $9.3\times10^{-5}$ and $7\times10^{-6}$ for C$^+$ and CO respectively. The column density that yields the correct line intensity magnitudes is $8.5\times10^{19}$ cm$^{-2}$, corresponding to an atomic hydrogen gas mass of $M_{\rm H} \approx 2.14\times10^{11}\,M_\odot$. The molecular hydrogen mass is then $M_{\rm H_2} = 2(n_{\rm H_2}/n_{\rm H})M_{\rm H} \approx 1.72\times10^{11}\,M_\odot$ and the total gas mass estimate is $M_{\rm gas} \approx 3.86\times10^{11}\,M_\odot$. The corresponding conversion factor in this model is $\alpha = 9.8\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$, where $\alpha$ is now defined as the total gas mass to CO luminosity ratio. Reducing the fixed sum of abundance ratios (relative to H) of CO and C$^+$ by a factor of two, to $\chi_{\rm CO} + \chi_{C^+} = 5\times10^{-5}$, results in a best-fit solution with a total gas mass of $M_{\rm gas} \approx 4.57\times10^{11}\,M_\odot$, nearly 20% larger than the value obtained assuming $\chi_{\rm CO} + \chi_{C^+} = 10^{-4}$.

Uniformly Mixed CO, C$^+$ Unvirialized Region

In the case where we assume gravitationally-unbound molecular clouds, we again find a best-fit model with a C$^+$ to CO ratio of $\chi_{C^+}/\chi_{\rm CO} \approx 13$, indicating that atomic hydrogen is the dominant collision partner in the LVG calculations. We thus calculate a three-dimensional grid of model CO and [CII] lines, varying $T_{\rm kin}$, $n_{\rm H}$, and the relative abundances $\chi_{C^+}$ and $\chi_{\rm CO}$, with the constraints that $\chi_{\rm CO} + \chi_{C^+} = 10^{-4}$ and $(dv/dr)_{\rm unvirialized} = 10\,(dv/dr)_{\rm virialized} = 1.7$ km s$^{-1}$ pc$^{-1}$. The observed set of CO and [CII] lines, assumed to have been emitted from the same region, is fit best with a temperature of 180 K, an atomic hydrogen number density of $10^{3.4}$ cm$^{-3}$, and abundance ratios of $9.3\times10^{-5}$ and $7\times10^{-6}$ for C$^+$ and CO respectively. For this set of parameters, the beam-averaged H column density is well constrained to be $N_{\rm H} \approx 2.9\times10^{19}$ cm$^{-2}$. This corresponds to atomic and molecular gas masses of $M_{\rm H} \approx 7.33\times10^{10}\,M_\odot$ and $M_{\rm H_2} \approx 5.20\times10^{10}\,M_\odot$ respectively, yielding a total gas mass estimate of $M_{\rm gas} \approx 1.25\times10^{11}\,M_\odot$. The ratio of the total gas mass to the CO luminosity in this model is $\alpha = 5.1\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$.

[Table 2 notes: (a) For each model parameter, the top row represents the unique, best-fit value obtained for the specified model; the bottom row provides the range of parameter values that yield results consistent with the observed line intensity ratios within the error bars of the observed data points (Walter et al. 2012). (b) $\alpha$ here is defined as the ratio of total gas mass to CO(1-0) luminosity, $\alpha = M_{\rm gas}/L'_{\rm CO(1-0)}$. In the models where the CO and [CII] lines are assumed to originate from separate regions, $M_{\rm gas} = M_{\rm H_2}$, since estimates of the atomic hydrogen gas mass could not be obtained via the LVG calculations. In the models where the CO molecules and C$^+$ ions are assumed to be uniformly mixed, the total gas mass is the sum of the molecular and atomic gas masses, $M_{\rm gas} = M_{\rm H_2} + M_{\rm H}$.]

COSMOLOGICAL CONSTRAINTS

Our inferred gas masses enable us to set cosmological constraints. For a particular set of cosmological parameters, the number density of dark matter halos of a given mass can be inferred from the halo mass function.
The Sheth-Tormen mass function, which is based on an ellipsoidal collapse model, expresses the comoving number density of halos $n$ per logarithm of halo mass $M$ as

$$\frac{dn}{d\ln M} = A\sqrt{\frac{2a}{\pi}}\;\frac{\rho_m}{M}\;\frac{d\ln\sigma^{-1}}{d\ln M}\;\nu\left[1 + \frac{1}{(a\nu^2)^p}\right]e^{-a\nu^2/2}, \tag{14}$$

where a reasonably good fit to simulations can be obtained by setting $A = 0.322$, $a = 0.707$, and $p = 0.3$ (Sheth & Tormen 1999). Here, $\rho_m$ is the mean mass density of the universe and $\nu = \delta_{\rm crit}(z)/\sigma(M)$ is the number of standard deviations away from zero that the critical collapse overdensity represents on mass scale $M$. Integrating this comoving number density over a halo mass range and volume element thus yields $N$, the expectation value of the total number of halos observed within solid angle $\mathcal{A}$ with mass greater than some $M_h$ and redshift larger than some $z$,

$$N = \mathcal{A}\int_z^{\infty} \frac{c\,(1+z')^2\,D_A^2}{H_0\sqrt{\Omega_m(1+z')^3 + \Omega_\Lambda}}\,dz'\int_{M_h}^{\infty}\frac{dn}{d\ln M}\,d\ln M, \tag{15}$$

where $H_0$ is the Hubble constant, $D_A$ is the angular diameter distance, and $\Omega_m$ and $\Omega_\Lambda$ are the present-day density parameters of matter and vacuum, respectively. Under the assumption that the number of galaxies in the field of observation follows a Poisson distribution, the probability of observing at least one such object in the field is then $P = 1 - F(0, N)$, where $F(0, N)$ is the Poisson cumulative distribution function with a mean of $N$. Given the detection of HDF 850.1, we can say that out of the hundreds of submillimetre-bright galaxies identified so far, at least one has been detected in the Hubble Deep Field at a redshift $z > 5$ with a halo mass greater than or equal to the halo mass associated with this source. This observation, taken together with the theoretical number density predicted by the Sheth-Tormen mass function, implies that a model that yields an expectation value $N$ can be ruled out at a confidence level of

$$1 - P = F(0, N) = e^{-N}, \tag{16}$$

where the solid angle covered by the original SCUBA field in which HDF 850.1 was discovered is $\simeq 9$ arcmin$^2$ (Hughes et al. 1998) and a ΛCDM cosmology is assumed with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_\Lambda = 0.73$, and $\Omega_m = 0.27$ (Komatsu et al. 2011). $M_{h,\min}$, the minimum inferred halo mass for HDF 850.1, is related to the halo's minimum baryonic mass component, a quantity derived in §3 via the LVG technique, in the following way:

$$M_{h,\min} = \frac{\Omega_m}{\Omega_b}\,M_{b,\min}, \tag{17}$$

where the baryonic and total matter density parameters are $\Omega_b = 0.05$ and $\Omega_m = 0.27$ respectively. Each model's estimate of the minimum baryonic mass associated with HDF 850.1 therefore corresponds to an estimate of $N(z>5, M_{h,\min})$ and respectively yields the certainty with which the model can be discarded. The confidence with which models can be ruled out on this basis is plotted as a function of the minimum baryonic mass estimated by the model (solid curve in Figure 3). To check the consistency of the four LVG models considered in §3 with these results, dashed lines representing the masses derived from each model are included in the plot (upper left panel). Assuming the CO molecules and C$^+$ ions are uniformly mixed in virialized clouds results in a baryonic mass $M_{b,\min} = 3.86\times10^{11}\,M_\odot$. The probability of observing at least one such source, with a corresponding halo mass $M_h \gtrsim 2.1\times10^{12}\,M_\odot$, is $\sim 7\times10^{-2}$; this model can thus be ruled out at the 1.8σ level (solid). The model which postulates separate virialized regions (dashed) can be ruled out with relatively less certainty, at the 1σ level. On the other hand, modeling the CO and [CII] emission lines as originating from mixed (dotted) or separate (dash-dot) unvirialized regions results in minimum baryonic masses which are consistent with the constraint posed by equation (15). We expect to find $N \sim 1.5$ and 10 such halos, respectively, at redshift $z \gtrsim 5$.
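The translation from an expected halo count $N$ to a "rule-out" significance is a one-liner; the sketch below reproduces the quoted numbers under the stated Poisson assumption (the two-sided Gaussian convention for σ levels is our assumption).

```python
import numpy as np
from scipy.stats import norm

def sigma_level(N_expected):
    """Given >= 1 detection, a model predicting N_expected sources survives
    with probability P = 1 - exp(-N); convert P to a two-sided sigma level."""
    P_detect = 1.0 - np.exp(-N_expected)
    return norm.ppf(1.0 - P_detect / 2.0)

print(sigma_level(0.07))  # ~1.8 sigma: the mixed virialized model
print(sigma_level(1.5))   # << 1 sigma: the mixed unvirialized model survives
```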
The fact that the expected average number of observed sources for the latter model is much higher than the actual number of sources observed may be due to the incompleteness of the conducted survey, and is therefore not grounds for ruling out this model. The baryonic masses obtained via the LVG method represent conservative estimates of the total baryonic content associated with HDF 850.1. In particular, mass contributions from any ionized gas or stars residing in the galaxy are not taken into account, and in the models where a layered cloud structure is assumed, the atomic hydrogen mass component is left undetermined. Furthermore, ejection of baryons to the IGM through winds may result in a halo baryon mass fraction that is smaller than the cosmic ratio, $\Omega_b/\Omega_m$, used in this paper, resulting in a conservative estimate of the minimum halo mass. We therefore consider the effects on the predicted number of observed halos, $N(M_{h,\min})$, if the minimum baryonic mass derived for each model is doubled, e.g. assuming a molecular-gas mass fraction of $\sim 1/2$ (Tacconi et al. 2013). Using these more accurate estimates of HDF 850.1's baryonic mass, we find that the two models in which the virialization condition is enforced can be ruled out at the $\sim 2.7\sigma$ and $2\sigma$ levels for the mixed and separate models, respectively (upper right panel). For comparison, we also consider the gas masses implied by the CO(1-0) line luminosities predicted by the two models where distinct CO and C$^+$ layers are assumed (lower left panel). Adopting a CO-to-H$_2$ conversion factor of $\alpha = 0.8\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$, we find that enforcing the cosmological constraint posed by the abundance of dark matter halos does not rule out either of the two models in which separate CO and C$^+$ regions were assumed, even if the minimum baryonic masses are doubled to account for neglected contributions to the total gas mass (lower right panel). If a conversion factor of $\alpha = 4.6\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$ is used, both models can be ruled out at the $\sim 1\sigma$ level, and increasing these obtained masses by a factor of two drives up the sigma levels to $\sim 1.8\sigma$ for both models.

[Figure 3 caption (fragments): ... the minimum baryonic masses for each model were doubled to account for neglected contributions to the total gas mass; models (a) and (c) can now be ruled out at the 2σ and 2.7σ levels respectively. Lower left panel: the vertical lines represent the mass values implied by the predicted CO(1-0) line luminosities from models (a) and (b) with CO-to-H$_2$ conversion factors of $\alpha = 0.8$ and $4.6\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$. If these minimum baryonic mass values are then doubled (lower right panel), both models can be ruled out at the $\sim 1.8\sigma$ level in the case where $\alpha = 4.6\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$ is adopted as the conversion factor.]

SUMMARY

In this paper, we employed the LVG method to explore alternative model configurations for the CO and C$^+$ emission-line regions in the high-redshift source HDF 850.1. In particular, we considered emission originating from (i) separate virialized regions, (ii) separate unvirialized regions, (iii) uniformly mixed virialized regions, and (iv) uniformly mixed unvirialized regions. For models (i) and (ii), where separate CO and C$^+$ regions were assumed, the kinetic temperature, $T_{\rm kin}$, and the molecular hydrogen density, $n_{\rm H_2}$, were fit to reproduce the two observed line ratios, $I_{\rm CO(6-5)}/I_{\rm CO(2-1)}$ and $I_{\rm CO(6-5)}/I_{\rm CO(5-4)}$, for a fixed canonical value of the CO abundance (relative to H$_2$), $\chi_{\rm CO} = 10^{-4}$. The column density of molecular hydrogen, $N_{\rm H_2}$, was then fit to yield the correct line intensity magnitudes, and the molecular gas mass was derived for each respective model. In models (iii) and (iv), where the CO molecules and C$^+$ ions were assumed to be uniformly mixed with abundance ratios satisfying the constraint $\chi_{\rm CO} + \chi_{C^+} = 10^{-4}$, we found that a relatively high ionization rate of $\zeta \simeq 2.5\times10^{-14}$ s$^{-1}$ is necessary to reproduce the set of observed line ratios, $\{I_{\rm [CII]}/I_{\rm CO(2-1)},\, I_{\rm [CII]}/I_{\rm CO(5-4)},\, I_{\rm [CII]}/I_{\rm CO(6-5)}\}$. Since the hydrogen in a cloud experiencing an ionization rate of this magnitude is primarily atomic, two additional parameters, $n_{\rm H}$ and $N_{\rm H}$, were introduced, and the set of LVG parameters $\{T_{\rm kin}, n_{\rm H_2}, n_{\rm H}, \chi_{\rm CO}, \chi_{C^+}, N_{\rm H_2}, N_{\rm H}\}$ was fit and used to obtain both a molecular and an atomic hydrogen gas mass for each model. The gas masses derived by employing the LVG technique thus represent conservative estimates of the minimum baryonic mass associated with HDF 850.1. These estimates were then used, together with the Sheth-Tormen mass function for dark matter halos, to calculate the average number of halos with mass at or above the minimum implied by each model's baryonic mass, $M_h \ge M_{h,\min}$, that each model predicts to find within the HDF survey volume. Given that at least one such source has been detected, we found that models (i) and (iii) can be ruled out at the 1σ and 1.8σ levels respectively. The confidence with which these models are ruled out increases if a less conservative estimate of the baryonic mass is taken; increasing the LVG-modeled gas masses by a factor of two, to account for neglected contributions to the total baryonic mass, drives up these sigma levels to $\sim 2\sigma$ and $2.7\sigma$ respectively. Furthermore, model (iv) can now be ruled out at the 1σ level as well. We are therefore led to the conclusion that HDF 850.1 is modeled best by a collection of unvirialized molecular clouds with distinct CO and C$^+$ layers, as in PDR models. The LVG calculations for this model yield a kinetic temperature of 100 K, a velocity gradient of 1.6 km s$^{-1}$ pc$^{-1}$, a molecular hydrogen density of $10^3$ cm$^{-3}$, and a column density of $10^{19}$ cm$^{-2}$. The corresponding molecular gas mass obtained using this LVG approach is $M_{\rm H_2} \approx 5.16\times10^{10}\,M_\odot$. For this preferred model we find that the H$_2$ mass-to-CO luminosity ratio is $\alpha = 1.2\,M_\odot\,({\rm K\,km\,s^{-1}\,pc^2})^{-1}$, close to the value found for ULIRGs in the local universe.
Analysis of equilibrium states of Markov solutions to the 3D Navier-Stokes equations driven by additive noise

We prove that every Markov solution to the three-dimensional Navier-Stokes equations with periodic boundary conditions driven by additive Gaussian noise is uniquely ergodic. The convergence to the (unique) invariant measure is exponentially fast. Moreover, we give a well-posedness criterion for the equations in terms of invariant measures. We also analyse the energy balance and identify the term which ensures equality in the balance.

INTRODUCTION

The Navier-Stokes equations on the torus with periodic boundary conditions, forced by additive Gaussian noise, are a reasonable model for the analysis of homogeneous isotropic turbulence of an incompressible Newtonian fluid. The equations share with their deterministic counterpart the well-known problems of well-posedness. It is therefore reasonable, and possibly useful, to focus on special classes of solutions having additional properties. This paper completes the analysis developed in [11], [12] and [13] (see also [1]). In those papers it was shown that there exists a Markov process which solves the equations. Moreover, under some regularity and non-degeneracy assumptions on the covariance of the driving noise, it was shown that the associated Markov transition kernel is continuous in a space W with a stronger topology (than the energy topology, namely $L^2$) for initial conditions in W.

In this paper we show that, under suitable regularity assumptions on the covariance, every Markov solution admits an invariant measure. Moreover, if the noise is non-degenerate, the invariant measure is unique and the convergence to the (unique) invariant measure is exponentially fast. We stress that similar results have already been obtained by Da Prato & Debussche [2], Debussche & Odasso [5] and Odasso [19], for solutions obtained as limits of spectral Galerkin approximations to (1.1) and constructed via the Kolmogorov equation associated to the diffusion. The main improvement of our results is that such conclusions are generically valid for all Markov solutions, and not restricted to solutions that arise as limits of the Galerkin approximations (this would make no difference whenever the problem is well-posed, though); moreover, the method is general enough to be applied to different problems (see for instance [1]). Our analysis is essentially based on the energy balance (see Definition 2.4 and Remark 2.5), and in turn shows that such a balance is the crucial ingredient.

It is worth noticing that the unique ergodicity results hold for any Markov solution; hence different Markov solutions have their own (unique) invariant measure. Well-posedness of (1.1) would ensure that the invariant measure is unique. We prove that the latter condition is also sufficient: if only one invariant measure is shared among all Markov solutions, then the problem is well-posed.

Finally, we analyse the energy balance for both the process solution to the equations and the invariant measure. Due to the lack of regularity of trajectories, the energy balance is in fact an inequality. We identify the missing term and, under the invariant measure, we relate it to the energy flux through wave-numbers. According to both the physical and the mathematical understanding of the equations, this term should be zero.
A non-zero compensating term would, on the one hand, invalidate the equations as a model for phenomenological theories of turbulence, and, on the other hand, show that blow-up is typically true. We stress that neither the former nor the latter statement is proved here.

Details on results. In the rest of the paper we consider the following abstract version of the stochastic Navier-Stokes equations (1.1) above,

$$du + [\nu Au + B(u, u)]\,dt = Q^{1/2}\,dW, \tag{1.2}$$

where $A$ is the Laplace operator on the three-dimensional torus $\mathbb{T}^3$ with periodic boundary conditions and $B$ is the projection onto the space of divergence-free vector fields with finite energy of the Navier-Stokes nonlinearity (see Section 2.1 for more details). Moreover, $W$ is a cylindrical Wiener process on $H$ and $Q$ is its covariance operator. We assume that $Q$ is a symmetric positive operator. We shall need additional assumptions on the covariance, as the results contained in the paper hold under slightly different conditions. Here we gather the additional assumptions we shall use.

[C3] there is $\alpha_0 > \frac16$ such that $A^{3/4}\cdots$

Notice that each of the above conditions implies the following one. We shall make clear at every stage of the paper which assumption is used. The first main result of the paper concerns the long-time behaviour of solutions to equations (1.2). We show that every Markov solution is uniquely ergodic and strongly mixing (Theorem 3.1 and Corollary 3.2). Moreover, under an additional technical condition (see Remark 2.5) we prove that the convergence to the (unique) invariant measure is exponentially fast (Theorem 3.3). We stress that uniqueness of the invariant measure is relative to the Markov solution it arises from. As we do not know whether the martingale problem associated to equations (1.2) is well-posed, in principle there are plenty of Markov solutions, and so plenty of invariant measures. In Section 4 we study a few properties of the set of invariant measures. In particular, we show the converse of the above statement: if there is only one common invariant measure for all Markov solutions, then the martingale problem is well-posed (Theorem 4.6). We also give some remarks on symmetries of the invariant measures (such as translations-invariance). Finally, we analyse the energy inequality (given as [M3] and [M4] in Definition 2.4, see also Remark 2.5). In particular, we identify the missing term which, once added, provides the equality. For an invariant measure $\mu$, we show that

$$\tfrac12\sigma^2 = \nu\,\varepsilon(\mu) + \iota(\mu),$$

where $\tfrac12\sigma^2$ is the rate of energy injected by the external force, $\varepsilon(\mu) = \mathbb{E}^{\mu}[|\nabla x|^2]$ is the mean rate of energy dissipation and $\iota(\mu)$ is the mean rate of inertial energy dissipation. We show also that $\iota(\mu)$ is given in terms of the energy flux through wave-numbers (see Frisch [15]) as

$$\iota(\mu) = \lim_{K\uparrow\infty}\mathbb{E}^{\mu}\big[\langle B(u, u), P^l_K u\rangle\big],$$

where $P^l_K$ denotes the projection onto the Fourier modes of wave-number at most $K$.

Acknowledgements. The author wishes to thank D. Blömker and F. Flandoli for several useful conversations and their helpful comments. The author is also grateful to A. Debussche for having pointed out the inequality used in the proof of Theorem A.2.

NOTATIONS AND PREVIOUS RESULTS

2.1. Notations. Let $\mathbb{T}^3 = [0, 2\pi]^3$ and let $\mathcal{D}^\infty$ be the space of infinitely differentiable vector fields $\varphi : \mathbb{R}^3 \to \mathbb{R}^3$ that are divergence-free and periodic. We denote by $H$ the closure of $\mathcal{D}^\infty$ in the norm of $L^2(\mathbb{T}^3, \mathbb{R}^3)$, and similarly by $V$ the closure in the norm of $H^1(\mathbb{T}^3, \mathbb{R}^3)$. Let $D(A)$ be the set of all $u \in H$ such that $\Delta u \in H$ and define the Stokes operator $A : D(A) \to H$ as $Au = -\Delta u$.
By properly identifying dual spaces, we have the Gelfand triple $V \subset H \cong H' \subset V'$. The bi-linear operator $B : V \times V \to V'$ is defined as

$$\langle B(u, v), w\rangle = \int_{\mathbb{T}^3} w\cdot(u\cdot\nabla)v\,dx$$

(we refer to Temam [22] for a more detailed account of all these notations). Since $A$ is a linear positive self-adjoint operator with compact inverse, we can define powers of $A$. We define two hierarchies of spaces related to the problem, using powers of $A$. The first is given by spaces of mild regularity (so called since they are larger than the space $V$), while the second is given by spaces of strong regularity, where $\theta$ is the map defined in (2.3). Notice that for every $\varepsilon_0$ and $\alpha_0$ as above the corresponding inclusions hold. In the proofs of most of the results of the paper we shall repeatedly use the inequalities of Lemma 2.2, where $\theta$ is the map defined in (2.3); if $\alpha = \frac12$, then $B$ maps $D(A^{1/2}) \times D(A^{1/2})$ into $H$.

2.2. Markov solutions to the Navier-Stokes equations. In this section we recall a few definitions and results from papers [12] and [13], with some additional remarks.

2.2.1. Almost sure super-martingales. We say that a process $\theta = (\theta_t)_{t\ge0}$ on a probability space $(\Omega, P, \mathcal{F})$, adapted to a filtration $(\mathcal{F}_t)_{t\ge0}$, is an a.s. super-martingale if it is $P$-integrable and there is a set $T \subset (0, \infty)$ of null Lebesgue measure (that we call the set of exceptional times of $\theta$) such that

$$\mathbb{E}[\theta_t \mid \mathcal{F}_s] \le \theta_s \qquad \text{for all } s \notin T \text{ and all } t > s.$$

Lemma 2.3. If $\theta$ is an a.s. super-martingale, then for every $s \ge 0$ and every $\varphi \in C^\infty_c(\mathbb{R})$ with $\varphi \ge 0$ and $\operatorname{Supp}\varphi \subset [s, \infty)$,

$$\mathbb{E}\Big[\int_s^\infty \dot{\varphi}(t)\,\theta_t\,dt\Big] \ge 0. \tag{2.5}$$

Proof. Fix $s \ge 0$ and consider a positive smooth map $\varphi$ with compact support in $[s, \infty)$. By a change of variables, using the a.s. super-martingale property, and in the limit as $\varepsilon \downarrow 0$, we get (2.5). It is easy to see that the converse is true (that is, if (2.5) holds, then the process is an a.s. super-martingale) under the assumption that the σ-fields $\{\mathcal{F}_t : t \ge 0\}$ are countably generated and $\theta$ is lower semi-continuous (see [14]).

2.2.2. Weak martingale solutions. Let $\Omega = C([0, \infty); D(A)')$, let $\mathcal{B}$ be the Borel σ-field on $\Omega$ and let $\xi : \Omega \to D(A)'$ be the canonical process on $\Omega$ (that is, $\xi_t(\omega) = \omega(t)$). A filtration can be defined on $\mathcal{B}$ as $\mathcal{B}_t = \sigma(\xi_s : 0 \le s \le t)$.

Definition 2.4. Given $\mu_0 \in \Pr(H)$, a probability $P$ on $(\Omega, \mathcal{B})$ is a solution starting at $\mu_0$ to the martingale problem associated to the Navier-Stokes equations (1.2) if, among other requirements,

[M3] the process $E^1_t$, defined $P$-a.s. on $(\Omega, \mathcal{B})$ as

$$E^1_t = |\xi_t|_H^2 + 2\nu\int_0^t |\xi_s|_V^2\,ds - \sigma^2 t,$$

is $P$-integrable and $(E^1_t, \mathcal{B}_t, P)$ is an a.s. super-martingale;

[M4] for each $n \ge 2$, the process $E^n_t$, defined $P$-a.s. on $(\Omega, \mathcal{B})$ as

$$E^n_t = |\xi_t|_H^{2n} + 2n\nu\int_0^t |\xi_s|_H^{2n-2}\,|\xi_s|_V^2\,ds - n(2n-1)\,\sigma^2\int_0^t |\xi_s|_H^{2n-2}\,ds,$$

is an a.s. super-martingale;

[M5] $\mu_0$ is the marginal of $P$ at time $t = 0$.

Remark 2.5 (enhanced martingale solutions). A slightly different approach was followed in [1] to show existence of a Markov solution for a different model (an equation for surface growth driven by space-time white noise), where the energy balance is given as an almost sure property. In the Navier-Stokes setting of this paper the property reads (some equivalent statements are possible, as in [1]):

[M3-as] there is a set $T_{P_x} \subset (0, \infty)$ of null Lebesgue measure such that for all $s \notin T_{P_x}$ and all $t \ge s$ the pathwise energy inequality holds, where the relevant functional $\mathcal{G}$ is expressed in terms of $z$, the solution to the Stokes problem (A.2), and $v = \xi - z$.

It is possible to show that, as in [1], there exist Markov solutions which additionally satisfy [M3-as]. We shall assume this statement (see [14] for more details). The map $x \mapsto P_x$ is in principle, from the above result, only measurable. The regularity of the dependence on the initial condition can be significantly improved under stronger assumptions on the noise, as shown by the theorem below. If $(P_x)_{x\in H}$ is a Markov solution, the transition semigroup associated to the solution is defined as

$$P_t\varphi(x) = \mathbb{E}^{P_x}[\varphi(\xi_t)] \tag{2.6}$$

for every bounded measurable $\varphi : H \to \mathbb{R}$.
Theorem 2.7 ([12, Theorem 5.11]). Under condition [C1] of Assumption 1.1, the transition semigroup $(P_t)_{t\ge0}$ associated to every Markov solution $(P_x)_{x\in H}$ is strong Feller in the topology of $W_{\alpha_0}$. More precisely, $P_t\phi \in C_b(W_{\alpha_0})$ for every $t > 0$ and every bounded measurable $\phi : H \to \mathbb{R}$.

The regularity result can be given more explicitly in terms of quasi-Lipschitz regularity (that is, Lipschitz up to a logarithmic correction) as in [13], albeit the estimate given there holds true only for $\alpha_0 = \frac34$ (an extension to all values of $\alpha_0 > \frac16$ can be found in [14]).

EXISTENCE AND UNIQUENESS OF THE INVARIANT MEASURE

In this section we prove existence of invariant measures by means of the classical Krylov-Bogoliubov method. Let $(P_x)_{x\in H}$ be a Markov solution and denote by $(P_t)_{t\ge0}$ its transition semigroup (see (2.6)). Let $x_0 \in H$ and

$$\mu_t = \frac{1}{t}\int_0^t P_s^*\,\delta_{x_0}\,ds, \tag{3.1}$$

where $\delta_{x_0}$ is the Dirac measure concentrated at $x_0$. It is known (see for example Da Prato & Zabczyk [3]) that any limit point of the family of probability measures $(\mu_t)_{t\ge0}$ is an invariant measure for $(P_t)_{t\ge0}$, provided the family is tight in the topology in which the transition semigroup is Feller.

The convergence of the transition probabilities to the unique invariant measure can be further improved if, under the same assumptions as in the above results, we deal with the enhanced martingale solutions introduced in Remark 2.5. This is a technical requirement that makes the proof of Theorem 3.3 below simple and, above all, feasible.

Theorem 3.3. There are constants $C_{\exp} > 0$ and $a > 0$, independent of the Markov solution, such that

$$\|P_t(x_0, \cdot) - \mu_\star\|_{TV} \le C_{\exp}\,e^{-at}$$

for all $t > 0$ and $x_0 \in H$, where $\|\cdot\|_{TV}$ is the total variation distance on measures.

Remark 3.4. The proof of Theorem 3.3 above actually shows a slightly stronger convergence, for every $x \in H$ and $t \ge 0$ and with the same constants $C_{\exp}$ and $a$, in the weighted norm $\|\cdot\|_V$ defined on Borel measurable maps $\phi : H \to \mathbb{R}$ (see [16] for details).

From Theorem 13 of [13] and again from Theorem 4.2.1 of Da Prato & Zabczyk [4] we also deduce the following result.

Lemma 3.6. Let $(P_x)_{x\in H}$ be a Markov solution. Then for every $x_0 \in H$ and $t \ge 0$,

$$\mathbb{E}^{P_{x_0}}\big[|\xi_t|_H^2\big] \le e^{-2\nu t}\,|x_0|_H^2 + \frac{\sigma^2}{2\nu}.$$

Proof. The result easily follows from the super-martingale property [M3], the Poincaré inequality and Gronwall's lemma (see for example [20] for details).

Lemma 3.7. Assume [C2] of Assumption 1.1. Then there are $C > 0$, $\delta > 0$ and $\gamma > 0$, depending only on $\varepsilon_0$, $\alpha_0$, $\nu$ and $\sigma^2$ (but not on the Markov solution), such that for $x_0 \in H$ and $t \ge 1$ the moment bound (3.2) below holds. A slight modification of the argument in the proof provides a similar inequality also for $t < 1$.

Proof. Fix $t \ge 1$ and $\varepsilon \le \varepsilon_R$, and let $n_\varepsilon \in \mathbb{N}$ be the largest integer such that $\varepsilon n_\varepsilon \le t$. By the Markov property we arrive at a bound (3.2) in terms of the measure $\mu_t$ defined in (3.1). Now, by Theorem A.1, and then by (A.5) and the Chebyshev inequality, we control the probability that the regularised solution ceases to coincide with the original one. We use the resulting inequality in (3.2) and apply Theorem A.2 and the previous lemma; the energy inequality and the previous lemma then allow us to estimate the only complicated term. Since by (A.5) the dependence of $\varepsilon_R$ on $R$ is like $R^{-a}$, for some exponent $a$ depending on $\varepsilon_0$, we may choose $R$ in such a way that the bound holds for every $t \ge 1$. In conclusion, the statement of the lemma is proved for initial conditions $x_0 \in V_{\varepsilon_0}$. If $x_0 \in H$, since for every $s > 0$ we know that $\xi_s \in V_{\varepsilon_0}$, $P_{x_0}$-a.s., then by the Markov property we may combine the previous lemma with this same lemma for initial conditions in $V_{\varepsilon_0}$. Finally, as $s \downarrow 0$, the conclusion follows by the monotone convergence theorem.

Proof of Theorem 3.1. Choose an arbitrary point $x_0 \in H$ and consider the sequence of measures $(\mu_t)_{t\ge1}$ defined by formula (3.1).
Since the bound of the previous lemma holds with the constants $\delta$ and $\gamma$ provided there, it follows by that same lemma that the sequence of measures is tight in $W_{\alpha_0}$.

3.2. The proof of Theorem 3.3. As stated in the theorem, in this section we work with the enhanced martingale solutions defined in Remark 2.5. This means that the energy balance [M3-as] is available for the proofs. Prior to the proof, we give a few auxiliary results, summarised in the following lemmas. In the first one we show that any solution enters a ball of small energy with positive probability.

Proof. Consider a value $k_1 = k_1(\delta)$, to be chosen later, and let $A$ be the event on which the noise part stays below the corresponding threshold. For all $\omega \in A$ such that the inequality in [M3-as] (at page 6) holds, we obtain a differential inequality for the energy, where we have set $k_2 = c(k_1^4 + k_1^{8/3})$ and $k_3 = ck_1^{8/3}$. By the Poincaré inequality (the first eigenvalue of the Laplace operator on the torus $\mathbb{T}^3$ is 1) and Gronwall's lemma, the energy decays below the prescribed level if we choose $k_1$ and $T_1$ suitably.

The second lemma shows that with positive probability the dynamics enters a (sufficiently large) ball of the space $V$.

Proof. For all $\omega \in A$ for which the inequality in [M3-as] (at page 6) holds, we can proceed as in the proof of Lemma 3.8 to get a bound on $|v_t|^2$, where $k_1$ is small enough so that $k_2 < \nu$. Next, we notice that the set $\{r \in [0, 1] : |v_r|_V^2 \le 2k_4\}$ is non-empty (its Lebesgue measure is larger than one half). So for each $r_0$ in such a set, $|v_{r_0}|_V^2 \le 2k_4$. Since the energy inequality [M3-as] holds, for a short time after $r_0$, $v$ coincides with the unique regular solution. We shall choose $k_1$ and $\delta$ small enough that this short time extends well beyond 1. Indeed, using (2.1) (as in (A.6) with $\varepsilon_0 = \frac14$), we obtain, for suitable universal constants $c_1$ and $c_2$, a differential inequality for $\varphi(r) = |v_r|_V^2$; provided the smallness condition on the initial datum holds, the solution to the differential inequality for $\varphi$ remains finite at least up to time $1 + r_0$. In particular, $\varphi(1) \le (2c_1)^{-1/2}$, and the claim follows by easy computations once the last term on the right-hand side of the above formula is chosen accordingly.

In the last auxiliary lemma we show that the dynamics enters a compact subset of $W_{\alpha_0}$. This is crucial, since the strong Feller property holds in the topology of $W_{\alpha_0}$ (Theorem 2.7).

Lemma 3.10 (entrance time in a ball of high regularity). Assume [C3] from Assumption 1.1. Then there is $\beta > 0$ (depending only on $\alpha_0$) such that, for suitable $T_3$ and $C$, the dynamics enters the set $\{y : |A^\beta y|^2 \le C\}$ by time $T_3$ with positive probability.

Proof. Given $R_2 > 0$, we choose $\beta = \theta'' - \theta(\alpha_0)$, with $T_3$ and $C$ as given in Lemma A.3. Notice that the set $K = \{y : |A^\beta y|^2 \le C\}$ is compact. If $\tau$ is the time up to which all solutions starting at $x$ coincide with the unique solution to problem (A.1), then the conclusion follows from Lemma A.3.

The first property follows from Theorem 13 in [13] (there equivalence is stated only for $x \in W_{\alpha_0}$, but it is easy to see by the Markov property that it holds for $x \in H$, as $W_{\alpha_0}$ is a set of full measure for each $P(t, x, \cdot)$). The second property follows from the strong Feller property, while the fourth property follows from Lemma 3.6. We only need to prove the third property. We fix $R \ge 1$ and we wish to prove that there are $T_0 = T_0(R)$ and $K = K(R)$ such that

$$\inf_{|x|_H^2 \le R} P_x\big(\xi_{T_0} \in K\big) > 0. \tag{3.4}$$

We choose the value $\delta$ provided by Lemma 3.9, together with the time $T_2$ and the value $R_2$. Corresponding to the values $R$ and $\delta$, Lemma 3.8 gives a time $T_1$. Moreover, corresponding to $R_2$, Lemma 3.10 provides the time $T_3$ and the value $C$. We set $T_0 = T_1 + T_2 + T_3$; then, if $|x|_H^2 \le R$, using the Markov property three times, the right-hand side is positive (and bounded from below independently of $x$) thanks to Lemmas 3.8, 3.9 and 3.10.
Finally, the constants $C_{\exp}$ and $a$ in the statement of the theorem are independent of the Markov solution, since all computations either depend only on the data (the viscosity $\nu$, the strength of the noise $\sigma^2$, etc., as in Lemma 3.6) or are made on the regularised problem analysed in the appendix.

FURTHER ANALYSIS OF EQUILIBRIUM STATES

In the previous section we have shown that, under suitable assumptions on the driving noise, every Markov solution has a unique invariant measure. As in principle there can be several different Markov solutions, there can also be several invariant measures. In the first part of this section we show that well-posedness of the martingale problem associated to (1.2) is equivalent to the statement that there is only one invariant measure, regardless of the multiplicity of solutions. In the second part we give some remarks on symmetries of invariant measures, while in the third part we analyse the energy balance.

Proof. It is sufficient to prove that the finite-dimensional marginals of $P_\star$ and $\eta_s P_\star$ are the same. The case of a single time is easy, by invariance of $\mu_\star$. We consider only the two-dimensional case (one can proceed by induction in the general case). Consider $t_1 < t_2$; then the claim follows by the Markov property and the invariance of $\mu_\star$.

In turn, the lemma above ensures that $P_\star$ is the unique probability measure on $\Omega$ such that

1. $P_\star$ is stationary,
2. $P_\star$ is associated² to the Markov solution $(P_x)_{x\in H}$.

² We say that a probability measure $P$ on $\Omega$ is associated to a Markov solution $(P_x)_{x\in H}$ if for every $t \ge 0$, $P|^{\omega}_{\mathcal{B}_t} = P_{\omega(t)}$ for $P$-a.e. $\omega \in \Omega$, where $(P|^{\omega}_{\mathcal{B}_t})_{\omega\in\Omega}$ is a regular conditional probability distribution of $P$ given $\mathcal{B}_t$.

Uniqueness follows easily, since $\mu_\star$ is the unique invariant measure of the Markov solution $(P_x)_{x\in H}$ and since the law of a Markov process is determined by its one-dimensional (with respect to time) marginal distributions (as in the proof of the lemma above). We shall see later on that for a special class of invariant measures this uniqueness statement can be strengthened (see Proposition 4.4). In general one can have several stationary solutions (see for example [20] for the definition and a different proof of existence), and possibly not all of them are associated to a Markov solution. Hence we define the two sets

$$\mathcal{I} = \{\mu : \mu \text{ is the marginal at any fixed time of a stationary solution}\}, \tag{4.2}$$

$$\mathcal{I}_m = \{\mu : \mu \text{ is an invariant measure of some Markov solution}\}. \tag{4.3}$$

By the same properties that ensure existence of solutions (and following similar computations, see for example [12]), it is easy to see that $\mathcal{I}$ is compact. Moreover, by Corollary 3.2, $\mathcal{I}_m$, and hence $\mathcal{I}_e$, are relatively compact in a much stronger topology.

4.1.1. A short recap on the selection principle. It is necessary to give a short account of the procedure which proves the existence of a Markov selection (namely, the proof of Theorem 2.6). We refer to [12] for all details. Given $x \in H$, let $\mathcal{C}(x) \subset \Pr(\Omega)$ be the set of all weak martingale solutions (according to Definition 2.4) to equation (1.2) starting at $x$. In the proof of Theorem 2.6 (see [12]) the sets $\mathcal{C}(x)$ are shrunk to a single element in the following way. Fix a family $(\lambda_n, f_n)_{n\ge1}$ which is dense in $[0, \infty) \times C_b(D(A)')$ and consider the functionals $J_n = J_{\lambda_n, f_n}$, where $J_{\lambda, f}$ is given by

$$J_{\lambda, f}(P) = \mathbb{E}^P\Big[\int_0^\infty \lambda\,e^{-\lambda t}\,f(\xi_t)\,dt\Big]$$

for arbitrary $\lambda > 0$ and upper semi-continuous $f : D(A)' \to \mathbb{R}$. Next, set

$$\mathcal{C}_{n+1}(x) = \Big\{P \in \mathcal{C}_n(x) : J_{n+1}(P) = \sup_{Q\in\mathcal{C}_n(x)} J_{n+1}(Q)\Big\}, \qquad \mathcal{C}_0(x) = \mathcal{C}(x).$$

All these sets are compact and their intersection is a single element (the selection associated to this maximised sequence), $\bigcap_{n\in\mathbb{N}} \mathcal{C}_n(x) = \{P_x\}$.
Given now a probability measure $\mu$ on $D(A)'$, one can define the set $\mathcal{C}(\mu)$ as the set of all probability measures $P$ on $\Omega$ such that

1. the marginal of $P$ at time 0 is $\mu$;
2. there is a map $x \mapsto Q_x : H \to \Pr(\Omega)$ such that $P = \int Q_x\,\mu(dx)$ and $Q_x \in \mathcal{C}(x)$ for all $x$ (in different words, the conditional distribution of $P$ at time 0 is made of elements of the sets $(\mathcal{C}(x))_{x\in H}$).

We can now give the following extension of the selection principle.

Proposition 4.3. Let $(P_x)_{x\in H}$ be the Markov selection associated to the sequence $(\lambda_n, f_n)_{n\ge1}$. Then the probability $P_\mu = \int_H P_x\,\mu(dx)$ is the unique maximiser associated to the sequence $(\lambda_n, f_n)_{n\ge1}$. More precisely, for every $n \ge 1$,

$$J_n(P_\mu) = \sup_{Q\in\mathcal{C}_{n-1}(\mu)} J_n(Q),$$

where the sets $\mathcal{C}_n(\mu)$ are defined iteratively as above, starting from $\mathcal{C}_0(\mu) = \mathcal{C}(\mu)$.

Proof. Since each $Q \in \mathcal{C}(\mu)$ is given by $Q = \int Q_x\,\mu(dx)$ for some $x \mapsto Q_x$, by linearity of the map $J_1$ it easily follows that $P_\mu \in \mathcal{C}_1(\mu)$. Moreover, each $Q \in \mathcal{C}_1(\mu)$ has a similar structure: $Q = \int Q_x\,\mu(dx)$ with $Q_x \in \mathcal{C}_1(x)$ for $\mu$-a.e. $x \in H$. In fact, $J_1(Q_x) \le J_1(P_x)$, $\mu$-a.s., and $J_1(Q) = J_1(P_\mu)$, and so $J_1(Q_x) = J_1(P_x)$ for $\mu$-a.e. $x$. By induction, $P_\mu \in \mathcal{C}_n(\mu)$ for each $n$. In conclusion, $P_\mu \in \mathcal{C}_\infty(\mu) = \bigcap_n \mathcal{C}_n(\mu)$, and each $Q = \int Q_x\,\mu(dx) \in \mathcal{C}_\infty(\mu)$ satisfies $Q_x \in \mathcal{C}_\infty(x)$ for $\mu$-a.e. $x \in H$. But we know that each $\mathcal{C}_\infty(x)$ has exactly one element, $P_x$, so that in conclusion the only element of $\mathcal{C}_\infty(\mu)$ is $P_\mu$.

4.1.3. Connection with well-posedness. Now, if we are given a sequence $(\lambda_n, f_n)_{n\in\mathbb{N}}$ as above, the selection principle provides a Markov solution $(P_x)_{x\in H}$. Corollary 3.2 ensures that this Markov solution has a unique invariant measure $\mu_\star$. Moreover, from the proposition above, the stationary solution $P_\star = \int P_x\,\mu_\star(dx)$ is the unique sequential maximiser of the sequence $(J_n)_{n\in\mathbb{N}}$ on $\mathcal{C}(\mu_\star)$. This justifies, in analogy with the definitions (4.2) and (4.3), the definition of the following set:

$$\mathcal{I}_e = \{\mu : \mu \text{ is the invariant measure associated to a Markov solution obtained by the maximisation procedure for some sequence } (\lambda_n, f_n)_{n\in\mathbb{N}}\}, \tag{4.4}$$

and obviously $\mathcal{I}_e \subset \mathcal{I}_m \subset \mathcal{I}$.

Proposition 4.4. If $\mu_\star \in \mathcal{I}_e$, then the stationary solution $P_\star$ associated to $\mu_\star$ is the unique stationary solution in $\mathcal{C}(\mu_\star)$.

Proof. Since $\mu_\star \in \mathcal{I}_e$, by definition there is a sequence $(\lambda_n, f_n)_{n\in\mathbb{N}}$, dense in $[0, \infty) \times C_b(D(A)')$, such that $P_\star$ maximises the functionals $J_n = J_{\lambda_n, f_n}$ (one after the other, as explained in Proposition 4.3). Now, if $\tilde{P} \in \mathcal{C}(\mu_\star)$ is a stationary solution, then, by stationarity, $J_n(\tilde{P}) = \int_H f_n\,d\mu_\star = J_n(P_\star)$ for all $n$. By Proposition 4.3, it follows that $\tilde{P} = P_\star$.

If we now consider Markov solutions such as those obtained for the Navier-Stokes equations, namely each of them strong Feller and irreducible on $W_{\alpha_0}$, the previous result immediately gives a criterion for well-posedness. In few words, uniqueness of the invariant measure among Markov solutions is equivalent to well-posedness of the martingale problem.

Corollary 4.5. Assume that every Markov selection is $W_{\alpha_0}$-strong Feller and fully supported on $W_{\alpha_0}$. If $(P_x)_{x\in H}$ and $(P'_x)_{x\in H}$ are two Markov selections, with $(P_x)_{x\in H}$ coming from a maximisation procedure, and they have the same invariant measure, then they coincide on $W_{\alpha_0}$.

Proof. Let $P_\star$ and $P'_\star$ be the stationary solutions associated to the two selections. If the two selections have the same invariant measure, it follows from the previous theorem that they have the same stationary solution, that is, $P_\star = P'_\star$.
It follows from this that their conditional probability distributions at time 0 coincide, that is, $P_x = P'_x$ for $\mu_\star$-a.e. $x$. By $W_{\alpha_0}$-strong Feller regularity and irreducibility, they then coincide for every $x \in W_{\alpha_0}$.

We summarise the result in the following theorem. It follows easily from the previous corollary and from the fact that well-posedness of the martingale problem is equivalent to uniqueness of Markov selections (see Theorem 12.2.4 of Stroock & Varadhan [21]).

Theorem 4.6. Assume that every Markov selection is $W_{\alpha_0}$-strong Feller and fully supported on $W_{\alpha_0}$. Then the martingale problem associated to (1.2) is well-posed if and only if all Markov solutions share the same invariant measure.

Proof. We only have to prove that, given two Markov solutions $(P_x)_{x\in H}$ and $(P'_x)_{x\in H}$, for every $x \in V_{\varepsilon_0}$ we have $P_x = P'_x$. This statement holds for $x \in W_{\alpha_0}$ by the previous corollary. Fix $\varepsilon_0 > 0$ and $x \in V_{\varepsilon_0}$. Choose $R \gg |x|^2_{V_{\varepsilon_0}}$; then, for every bounded continuous $\phi$, the Markov property yields a localisation at a small time $\delta$, where $P$ is the transition semigroup of $(P_x)_{x\in H}$. The first term on the right-hand side is independent of the selection, by the weak-strong uniqueness of Theorem A.1; hence, by the blow-up estimate of Theorem A.1, as $\delta \to 0$ we get $P_t\phi(x) = P'_t\phi(x)$ for all $\phi$ and all $t$.

4.2. Translations-invariance and other symmetries. In the analysis of homogeneous isotropic turbulence, for which equations (1.1) can be considered a model, it is interesting to consider equilibrium states invariant with respect to several symmetries (see for example Frisch [15]). Here we are interested in solutions which are translations-invariant (in the physical space). For every $a \in \mathbb{R}^3$, define on $\mathcal{D}^\infty$ the map $m_a : \mathcal{D}^\infty \to \mathcal{D}^\infty$ as $m_a(\varphi)(x) = \varphi(a + x)$, $x \in \mathbb{R}^3$, for any $\varphi \in \mathcal{D}^\infty$. The map obviously extends to $H$ and $D(A^\alpha)$ for each $\alpha$. By composition, it extends to continuous functions on $H$ (or $D(A^\alpha)$ for every $\alpha$) and, by duality, to probability measures on $H$. It also extends to $\Omega$ as $m_a(\omega)(t) = m_a(\omega(t))$, $t \ge 0$, $\omega \in \Omega$, and, by duality, to probability measures on $\Omega$. A function (or a measure) is translations-invariant if it is invariant under the action of $(m_a)_{a\in\mathbb{R}^3}$.

Proposition 4.7. Assume that the covariance $Q$ commutes with the translations $(m_a)_{a\in\mathbb{R}^3}$. Then:

1. for every $a \in \mathbb{R}^3$, $m_a$ is a one-to-one map on $\mathcal{I}$, on $\mathcal{I}_m$ and on $\mathcal{I}_e$;
2. there is at least one translations-invariant measure in $\mathcal{I}$.

Proof. We first show that if $P$ is the law of a solution to equations (1.2), then $m_a P$ is also a solution, for every $a \in \mathbb{R}^3$. Since for every $a \in \mathbb{R}^3$ the map $m_a$ is an isometry on $H$, the image of a cylindrical Wiener process on $H$ is again a cylindrical Wiener process. The assumption on $Q$ now ensures that the noise term is translations-invariant, and so it is easy to check that all requirements of either Definition 2.4 or of any definition of solution for the stochastic PDE (1.1) available in the literature (see for example Flandoli & Gatarek [9]; we refer also to [20] as regards stationary solutions) are verified. In particular, if $P$ is stationary, then $m_a P$ is again stationary, and so $m_a$ is a one-to-one map on $\mathcal{I}$. Moreover, since $\mathcal{I}$ is closed and convex (see Remark 4.2), it follows that there exists a translations-invariant measure. Indeed, given $\mu \in \mathcal{I}$, there is a stationary solution $P_\mu$ whose marginal is $\mu$. Now, the probability measure

$$\bar{P} = \frac{1}{(2\pi)^3}\int_{[0,2\pi]^3} m_a P_\mu\,da$$

is again a stationary solution and its marginal is translations-invariant, since $m_{a+2\pi k} = m_a$ for every $k \in \mathbb{Z}^3$. We next prove that $m_a$ maps $\mathcal{I}_m$ one-to-one. Let $\mu_\star \in \mathcal{I}_m$ and consider a Markov solution $(P_x)_{x\in H}$ having $\mu_\star$ as one of its invariant measures. Fix $a \in \mathbb{R}^3$ and set $Q_x = m_a(P_{m_{-a}(x)})$. It is easy to verify that $(Q_x)_{x\in H}$ is another Markov solution, and $m_a(\mu_\star)$ is an invariant measure of $(Q_x)_{x\in H}$.
Finally, in order to show that $m_a$ maps $\mathcal{I}_e$ one-to-one, we only need to find a maximising sequence for the solution $(Q_x)_{x\in H}$ defined above. Let $(\lambda_n, f_n)_{n\in\mathbb{N}}$ be a maximising sequence for $(P_x)_{x\in H}$; then $(\lambda_n, f_n \circ m_{-a})_{n\in\mathbb{N}}$ is a maximising sequence for $(Q_x)_{x\in H}$.

We stress that in the proposition above the existence of a translations-invariant equilibrium measure is granted in $\mathcal{I}$, but we do not know whether such a measure belongs to $\mathcal{I}_m$. Notice finally that if problem (1.2) is well-posed, it follows easily that the unique invariant measure must be translations-invariant. Similar conclusions can be drawn for other symmetries of the torus, such as isotropy (invariance with respect to rotations; see for example [10], where such symmetries are discussed in view of a connection between homogeneous turbulence and equations (1.1)).

4.3. The balance of energy. In the framework of Markov solutions examined in this paper, the balance of energy corresponds to the a.s. super-martingale property [M3] (and, more generally, [M4]) of Definition 2.4. As clarified in [12], the two facts that

1. the balance holds only for almost every time,
2. the balance is an inequality, rather than an equality,

correspond to a lack of regularity of the solutions to equations (1.1), in time in the first case and in space in the second. From the point of view of the model, these facts translate into a loss of energy in the balance. Generally speaking, the problem could be approached by using the Doob-Meyer decomposition (which may hold even in this case, where the energy-balance process $E^1$ is not continuous and the filtration $(\mathcal{B}_t)_{t\ge0}$ does not satisfy the usual conditions; see Dellacherie & Meyer [6]). We shall follow a different approach, due to the lack of regularity of the trajectories of solutions to the equations. We shall see that the bounded-variation term in the decomposition of $E^1$ is a distribution-valued process.

Let $a > 0$ and define the operator $L_a = \exp(-aA^{1/2})$. Given a martingale solution $P_x$ starting at some $x \in H$, there is a Wiener process $(W_t)_{t\ge0}$ such that the canonical process $\xi$ on $\Omega$ solves (1.2). The process $L_a\xi$ under $P_x$ is regular enough that we can use standard stochastic calculus. Itô's formula applied to $|L_a\xi_t|_H^2$ gives

$$d|L_a\xi_t|_H^2 = \big(-2\nu\,|A^{1/2}L_a\xi_t|_H^2 - 2\langle B(\xi_t, \xi_t), L_a^2\xi_t\rangle + \sigma_a^2\big)\,dt + 2\langle L_a\xi_t, L_aQ^{1/2}\,dW_t\rangle,$$

where $\sigma_a^2 = \operatorname{Tr}[Q\,L_{2a}]$, and the identity can then be integrated in time, $P_x$-a.s. As $a \downarrow 0$, the operator $L_a$ approximates the identity, so that, by the regularity of $\xi$ under $P_x$, all terms except the nonlinear one converge, $P_x$-a.s. and in $L^1(\Omega)$, where $\sigma^2 = \operatorname{Tr}[Q]$. In conclusion, the limit

$$\langle J(\xi), \varphi\rangle = \lim_{a\downarrow0}\,2\int_0^\infty \varphi(t)\,\langle B(\xi_t, \xi_t), L_a^2\xi_t\rangle\,dt, \qquad \varphi \in C_c^\infty(\mathbb{R}), \tag{4.5}$$

exists $P_x$-a.s. and in $L^1(\Omega)$, and defines a distribution-valued random variable. Moreover, $J(\xi)$ depends only on $\xi$ (that is, on $P_x$) and not on the approximating operators $(L_a)_{a>0}$ used. We finally obtain the decomposition (4.6) of the energy balance. The previous computations and Lemma 2.3 finally provide the following result. In few words, the next theorem states that the term $J(\xi)$ plays the role of the increasing process in the Doob-Meyer decomposition of the a.s. super-martingale $(E^1_t)_{t\ge0}$ (defined by property [M3] of Definition 2.4).

Theorem 4.8. Given a martingale solution $P_x$, there exists a distribution-valued random variable $J(\xi)$, defined by (4.5), such that the (distribution-valued) process $t \mapsto E^1_t + \langle J(\xi), \mathbf{1}_{[0,t]}\rangle$ is a martingale. Moreover, $J(\xi)$ is a positive distribution, in the sense that for every $\varphi \in C_c^\infty(\mathbb{R})$ with $\varphi \ge 0$ and $\operatorname{Supp}\varphi \subset [s, \infty)$,

$$\mathbb{E}^{P_x}\big[\langle J(\xi), \varphi\rangle\big] \ge 0. \tag{4.7}$$

Proof. The first part of the theorem follows easily from the above computation and formula (4.6). The second part is a consequence of the first part (the martingale property) and of the fact that $(E^1_t)_{t\ge0}$ is an a.s. super-martingale.
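To see what Theorem 4.8 is recording, it may help to pass formally to the limit $a \downarrow 0$ in the regularised Itô identity above. The following display is our informal sketch; the rigorous meaning of each term is the one given by (4.5) and Theorem 4.8:

$$\underbrace{|\xi_t|_H^2 + 2\nu\int_0^t|\xi_s|_V^2\,ds - \sigma^2 t}_{E^1_t} \;+\; \big\langle J(\xi), \mathbf{1}_{[0,t]}\big\rangle \;=\; |\xi_0|_H^2 + 2\int_0^t\langle\xi_s, Q^{1/2}\,dW_s\rangle.$$

The right-hand side is the initial energy plus a martingale, so the inertial term $\langle J(\xi), \mathbf{1}_{[0,t]}\rangle$ is exactly the correction that restores equality in the energy balance [M3].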
The Itô formula applied to ϕ(t)|L_a ξ|^{2n}_H provides an analysis of the a.s. super-martingale E^n, defined in property [En] of Definition 2.4, similar to that developed above for E¹ and J(ξ). Duchon & Robert [7] show that the energy equality holds for suitable weak solutions to the deterministic Navier-Stokes equations if one takes into account an additional term D, a distribution in space and time, obtained by means of the limit of space-time regularisations. Their computations in our setting would lead to a random distribution D(ξ)(t, x) in space and time. This is only formal, because in principle our solutions are not suitable (see [20] for the existence of suitable solutions in the stochastic setting). Moreover, they relate the quantity D to the four-fifths law in turbulence theory (see for instance Frisch [15]).

4.3.1. The mean rate of inertial energy dissipation. Consider a Markov solution (P_x)_{x∈H} and let µ⋆ be its unique invariant measure. Define the mean rate of energy dissipation as ε(µ⋆) = ∫ |x|²_V µ⋆(dx). We know that 2νε(µ⋆) ≤ σ². We can as well consider the expectation, with respect to the stationary solution P_{µ⋆}, of the distribution J defined in the previous section. As µ⋆ is an invariant measure, the distribution ϕ ↦ E^{P_{µ⋆}}[⟨J(ξ), ϕ⟩] is invariant with respect to time-shifts. Hence there is a constant ι(µ⋆), which we call the mean rate of inertial energy dissipation, that represents its density. We notice that, as a consequence of (4.7), 0 ≤ ι(µ⋆). The quantity ι(µ⋆) can be given as the expectation of (4.5); notice that in this case the expectation in µ⋆ and the limit in (4.5) commute.

We give a different formulation of ι(µ⋆) in terms of Fourier modes. As the definition of J (and hence of ι) is independent of the approximation (as long as the approximating quantities are regular enough, so that all the computations are correct), we use an ultraviolet cut-off in Fourier space. For every threshold K, define the projection P^l_K of H onto the low modes as P^l_K x = Σ_{|k|≤K} x_k e^{ik·x}, and the projection onto the high modes as P^h_K = I - P^l_K. Applying the Itô formula to ϕ(t)|P^l_K ξ_t|² as in the previous section, taking the expectation with respect to P_{µ⋆} and then letting K ↑ ∞ yields a representation formula for ι(µ⋆) in which the non-linear contribution is the sum of a finite number of terms (so we can use the anti-symmetry of the non-linear term without convergence issues). Following Frisch [15, Section 6.2], the last term obtained in the formula above is the energy flux through wave-number K, and it represents the energy transferred from the scales up to K to smaller scales. From the previous section we know that ι(µ⋆) ≥ 0; this is a consequence of property [E1] of Definition 2.4. From a mathematical point of view, the existence of invariant measures with ι(µ⋆) > 0 would be evidence for loss of regularity and, in turn, for blow-up. From a physical point of view, the energy flux through wave-numbers should converge to zero (hence, again, we would expect ι(µ⋆) = 0), as the energy should flow through modes essentially only in the inertial range (we refer again to Frisch [15]).

Proposition 4.11. We have: 1. the map µ⋆ → ε(µ⋆) has a smallest element in I (solution of largest mean inertial dissipation).

Proof. The first part follows easily, as I is compact (see Remark 4.2) and µ → ε(µ) is lower semi-continuous for the topology with respect to which I is compact. As regards the second part, we know by Corollary 3.2 that I_m is relatively compact on C_b(V).
Hence, if M is the largest value attained by ε on I_m and (µ_n)_{n∈N} is a maximising sequence, say ε(µ_n) ≥ M - 1/n, by compactness there is a µ_∞ such that, up to a sub-sequence, µ_n → µ_∞. Letting R → ∞ in the truncated functionals, it follows that ε(µ_∞) ≥ M, hence ε(µ_∞) = M. We have not yet been able to prove the condition given in item (2) of the previous proposition. We also remark that such measures of largest and smallest mean inertial dissipation may not be unique, as both functionals ε(·) and ι(·) are translations-invariant (see Section 4.2).

Given a value ε0 ∈ (0, 1/4], we consider the regularised problem (A.1) in V^{ε0}, together with the stopping time τ^{(ε0,R)} (and τ^{(ε0,R)}(ω) = ∞ if the corresponding set is empty). The main aim of this section is to analyse the solutions to the above problem and their connections to the original Navier-Stokes equations (1.2). Before turning to the results on the regularised problem (A.1), we remark that in the proofs of all results of this section we shall use the splitting u^{(ε0,R)} = v^{(ε0,R)} + z, where z solves the linear Stokes problem (A.2), so that v^{(ε0,R)} solves an equation with random coefficients.

A.1. The weak-strong uniqueness principle. We first extend the weak-strong uniqueness principle stated in [12, Theorem 5.4] to the above problem (A.1). This is the content of the following result.

Theorem A.1. Assume condition (2) of Assumption 1.1 and let ε0 ∈ (0, 1/4] with ε0 < 2α0. Then, for every x ∈ V^{ε0}, equation (A.1) has a unique martingale solution P^{(ε0,R)}_x. Moreover, the following statements hold. 1. (weak-strong uniqueness) On the interval [0, τ^{(ε0,R)}], the probability measure P^{(ε0,R)}_x coincides with any martingale solution P_x of the original stochastic Navier-Stokes equations (1.2), for every t ≥ 0 and every bounded measurable ϕ : H → R. 2. (blow-up estimate) There are c0 > 0, c1 > 0 and c2 > 0, depending only on ε0, such that the estimate (A.5) holds for every x ∈ V^{ε0}.

Proof. The proof is developed in four steps, which are contained in the following subsections. More precisely, in the first step we prove existence of solutions for problem (A.1), while in the second step we prove uniqueness. The weak-strong uniqueness principle is then proved in the third step and the blow-up estimate (A.5) is given in the fourth step.

Step 1: Existence. We only show the key estimate for existence. Let z be the solution to the linear Stokes problem (A.2) and consider v^{(ε0,R)} as above. The usual energy estimate (here we write u = u^{(ε0,R)} and v = v^{(ε0,R)} for brevity) follows from interpolation inequalities and Lemma 2.2, with α = ε0. Notice that, by the choice of ε0 with respect to α0, |A^{θ(ε0)} z|²_H has exponential moments. In order to show (A.4), we derive an a-priori estimate for the time derivative (d/dt) v^{(ε0,R)} in L²(0, T; D(A^{-(1/4-ε0)})), for all T > 0. The continuity of u^{(ε0,R)} then follows from this fact and from the continuity of z. The a-priori estimate follows by multiplying the equation by (d/dt) A^{2ε0-1/2} v^{(ε0,R)}, where we use the same estimates as in (A.6) (again writing u = u^{(ε0,R)} and v = v^{(ε0,R)} for brevity).

Step 3: Weak-strong uniqueness. The proof works exactly as in [12, Theorem 5.12]; we give a short account for the sake of completeness. It is developed in the following steps. 1. The energy balance E of w = u^{(ε0,R)} - ξ is an a.s. super-martingale under P_x. 2. τ^{(ε0,R)} is a stopping time with respect to the filtration (B_t)_{t≥0}. 3.
The stopped process (E_{t∧τ^{(ε0,R)}})_{t≥0} is again an a.s. super-martingale. 4. The previous step implies the conclusion. All the above steps can be carried out exactly as in the proof of Theorem 5.12 of [12] (the key point is that u^{(ε0,R)} is continuous in time with values in V^{ε0} with probability one). The only difference is in the last step, where the estimate of the non-linearity needs to be replaced by the following one,

⟨w, B(u^{(ε0,R)}, u^{(ε0,R)}) - B(ξ, ξ)⟩_H = -⟨u^{(ε0,R)}, B(w, w)⟩_H.

Finally, the above estimate can be obtained as in (A.7).

A.2. Moments of norms of stronger regularity. The proof of the following theorem is based on an inequality taken from Temam [22, Section 4.3, Part I] (see also Odasso [18]).

Proof. If α0 ≤ 1/4, we choose ε0 < 2α0 (this condition is unnecessary for all other values of α0). The noise is not regular enough to let us work directly on u^{(ε0,R)}, so we rely, as in the proof of the previous theorem, on v^{(ε0,R)} = u^{(ε0,R)} - z. Let p = 1/2 - ε0 (the value of p could be slightly improved, but this is beyond our needs) and compute the time derivative of the corresponding norm, where we have set u = u^{(ε0,R)} and v = v^{(ε0,R)}. The non-linear term can be estimated as in (A.6). The term in z is plain (see for example Da Prato & Zabczyk [3]), while the term in |v|_{V^{ε0}} can be estimated by means of the energy inequality [E1] of Definition 2.4. Finally, in order to prove (A.8), we use again the energy balance, since by Young's inequality v^{2γ}
Diabetes mellitus in patients with heart failure and reduced ejection fraction: a post hoc analysis from the WARCEF trial

Patients with heart failure with reduced ejection fraction (HFrEF) and diabetes mellitus (DM) have an increased risk of adverse events, including thromboembolism. In this analysis, we aimed to explore the association between DM and prognosis in HFrEF using data from the "Warfarin versus Aspirin in Reduced Cardiac Ejection Fraction" (WARCEF) trial. We analyzed factors associated with DM using multiple logistic regression models and evaluated the effect of DM on long-term prognosis through adjusted Cox regressions. The primary outcome was the composite of all-cause death, ischemic stroke, or intracerebral hemorrhage; we explored the individual components as secondary outcomes, as well as the interaction between treatment (warfarin or aspirin) and DM on the risk of the primary outcome, stratified by relevant characteristics. Of 2294 patients (mean age 60.8 (SD 11.3) years, 19.9% females) included in this analysis, 722 (31.5%) had DM. On logistic regression, cardiovascular comorbidities, symptoms and ethnicity were associated with DM at baseline, while age and body mass index showed a nonlinear association. Patients with DM had a higher risk of the primary composite outcome (Hazard Ratio [HR] and 95% Confidence Interval [CI]: 1.48 [1.24-1.77]), as well as of all-cause death (HR [95% CI]: 1.52 [1.25-1.84]). As in prior analyses, no statistically significant interaction was observed between DM and the effect of warfarin on the risk of the primary outcome in any of the subgroups explored. In conclusion, we found that DM is common in HFrEF patients and is associated with other cardiovascular comorbidities and risk factors, and with a worse prognosis.

Supplementary Information The online version contains supplementary material available at 10.1007/s11739-024-03544-4.

Introduction

Currently, approximately 530 million adults worldwide are living with diabetes mellitus (DM), translating to 10% of the general adult population [1]. Heart failure (HF) affects up to 64 million people worldwide, with a prevalence ranging between 1 and 3%. The incidences of both DM and HF are also rising [2,3], and as a result, DM and HF are often found together. Indeed, DM can foster the onset of HF [4]: epidemiological trends show that up to 30% of patients with HF also have DM, with figures higher in hospitalized patients and increasing over the last decades [5,6].

The pathophysiology underlying the relationship between DM and HF is complex and only partially understood [7,8]. DM promotes the onset and progression of HF through atherosclerosis, ischemic heart disease, and loss of myocardial function [8,9]; hyperglycemia itself has detrimental effects on the myocardium [10]. Furthermore, DM can induce other risk factors for HF (or enhance their effects), including arterial hypertension, atherogenic dyslipidemia, thrombogenesis, and inflammation [8].
Among the detrimental effects of HF, the promotion of a hypercoagulable state has been repeatedly described [11]. This contributes to the higher risk of ischemic stroke found in patients with HF [12], even in the absence of other known causes of thromboembolic risk, such as atrial fibrillation (AF) [13]. DM bolsters thromboembolic risk [14] and has been described as a potential factor defining a subgroup of patients with HF and reduced ejection fraction (HFrEF) that may be at particularly high risk of stroke [15]. Hence, it would be anticipated that different antithrombotic drugs may have different effects in the "high-risk" DM subgroup.

The Warfarin versus Aspirin in Reduced Cardiac Ejection Fraction (WARCEF) trial compared warfarin vs. aspirin in patients with HFrEF and sinus rhythm and found no significant differences in the primary composite outcome of ischemic stroke, intracerebral hemorrhage, or death from any cause [16]. A previous comprehensive analysis of WARCEF subgroups showed that the effect of treatment did not differ in patients with and without DM, both before and after adjustment for multiple covariates [17]. Beyond this, however, the effects of DM in this context remain unclear.

In this additional post hoc analysis of the WARCEF trial, our primary aim was to analyze the association between DM and the prognosis of patients with HFrEF. We also explored whether there may be a different effect of warfarin vs. aspirin in some subgroups of DM patients.

Methods

Full details on the design, follow-up, and primary results of the WARCEF trial were previously reported [16,18]. Briefly, the trial was conducted between October 2002 and January 2010 and enrolled 2305 patients with HFrEF. Patients eligible for inclusion were adults (≥ 18 years) with HFrEF (left ventricular ejection fraction ≤ 35% assessed by echocardiography, or radionuclide or contrast ventriculography, within 3 months before randomization), normal sinus rhythm, and no contraindication to warfarin therapy; those with a clear indication for either warfarin or aspirin were not eligible for inclusion. Moreover, while patients in any functional class of the New York Heart Association (NYHA) classification could be included, patients in NYHA class I could account for ≤ 20% of the total sample size. Other eligibility criteria included a modified Rankin score ≤ 4 and planned treatment with beta-blockers, an angiotensin-converting enzyme inhibitor (or angiotensin receptor blocker where indicated), or hydralazine and nitrates [16]. The main exclusion criteria were conditions associated with a high risk of cardiac embolism, such as atrial fibrillation (AF), a mechanical heart valve, endocarditis, or an intracardiac mobile or pedunculated thrombus. Follow-up was performed with an initial planned maximum duration of 5 years, further extended to 6 years; the trial's primary outcome was the composite of ischemic stroke (IS), intracerebral hemorrhage (ICH), or death from any cause. The study adhered to the principles of the Declaration of Helsinki; the study protocol was approved by the institutional review boards and ethics boards of participating centers, and written informed consent was provided by all patients. The trial was registered at ClinicalTrials.gov (NCT00041938).
For each patient, baseline information on medical history and comorbidities, including the presence of DM, was collected using the customized Web-based WARCEF data management system. For this analysis, we included all patients with data available on the presence of DM at baseline.

Study outcomes

Full details on outcome definition and adjudication in WARCEF are reported elsewhere [16,18]. The aim of the WARCEF trial was to compare warfarin vs. aspirin on a primary composite outcome of ischemic stroke, intracerebral hemorrhage, or death from any cause, analyzed in a time-to-first-event fashion. In this post hoc analysis, we aimed to evaluate the association between DM and the prognosis of patients with HFrEF, using the primary outcome as defined in the WARCEF trial. We also evaluated exploratory secondary outcomes (i.e., the individual components of the primary composite outcome: all-cause death, IS, or ICH) and explored whether there was a different effect of warfarin vs. aspirin in some subgroups of patients with DM.

Statistical analysis

Continuous variables were expressed as mean and standard deviation (SD), and differences were evaluated using Student's t-test. Categorical variables were reported as counts and percentages, and differences were evaluated using the chi-square test.

To evaluate factors associated with DM at baseline, we performed a multiple logistic regression analysis. Covariates included were age and Body Mass Index (BMI) (both modeled as restricted cubic splines with 4 knots, with age = 65 years and BMI = 25 kg/m² as references), sex, smoking habit (current vs. ex/never), race or ethnic group, NYHA class (I-II vs. III-IV), and history of hypertension, stroke/transient ischemic attack (TIA), myocardial infarction (MI), peripheral vascular disease (PVD), and atrial fibrillation (AF). Results were reported as adjusted odds ratios (aOR) and corresponding 95% confidence intervals (CI) for categorical variables; the relationship between the continuous variables and the aOR (95% CI) for the presence of DM was reported graphically.

For both the primary and the exploratory secondary outcomes, incidence rates (IR) and corresponding 95% CI were reported according to the presence of DM. To analyze the association between history of DM and the risk of outcomes, we used multiple adjusted Cox regression models. Covariates included were age and BMI (both modeled as restricted cubic splines with 4 knots), sex, treatment allocation (warfarin or aspirin), smoking habit (current vs. ex/never), race or ethnic group, NYHA class (I-II vs. III-IV), and history of hypertension, stroke/TIA, MI, PVD, and AF. Results were reported as adjusted hazard ratios (aHR) and corresponding 95% CI.

Additionally, we evaluated the effect of DM on the primary composite outcome in relevant subgroups of patients (i.e., according to age, sex, NYHA class, race/ethnicity, smoking status, history of hypertension, stroke/TIA, MI, PVD, and AF); we also explored whether the effect of the study drugs (i.e., warfarin vs. aspirin) on the risk of the primary composite outcome differed in patients with vs. without DM, through an interaction analysis stratified by clinically relevant characteristics (age, sex, NYHA class, history of hypertension, stroke/TIA, MI, and PVD).

A two-sided p < 0.05 was considered statistically significant. All analyses were performed using R 4.2.3 (R Core Team, Vienna, Austria) for Windows.
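For readers who want to reproduce this modelling strategy on their own data, the sketch below illustrates the two main steps just described (a restricted-cubic-spline basis for age and BMI, followed by a multivariable Cox model) in Python. It is a minimal illustration, not the authors' code: the WARCEF analyses were run in R, the file name and column names here are invented placeholders, and patsy's cr() places spline knots at quantiles, which need not match the knot positions used in the paper.

```python
# Hedged sketch of the spline + Cox workflow described above.
# Requires: pandas, patsy, lifelines. All column/file names are assumptions.
import pandas as pd
from patsy import dmatrix
from lifelines import CoxPHFitter

df = pd.read_csv("warcef_analysis_set.csv")  # hypothetical analysis dataset

# Natural (restricted) cubic spline bases for age and BMI, 4 df as a stand-in
# for the paper's 4-knot splines; "- 1" drops the intercept column.
spline_age = dmatrix("cr(age, df=4) - 1", df, return_type="dataframe")
spline_bmi = dmatrix("cr(bmi, df=4) - 1", df, return_type="dataframe")
spline_age.columns = [f"age_s{i}" for i in range(spline_age.shape[1])]
spline_bmi.columns = [f"bmi_s{i}" for i in range(spline_bmi.shape[1])]

covars = ["dm", "sex", "warfarin", "current_smoker", "nyha_3_4",
          "hypertension", "stroke_tia", "mi", "pvd", "af"]  # assumed 0/1 columns
X = pd.concat([df[covars + ["time_to_event", "event"]],
               spline_age, spline_bmi], axis=1)

cph = CoxPHFitter()
cph.fit(X, duration_col="time_to_event", event_col="event")
cph.print_summary()  # exp(coef) for 'dm' plays the role of the adjusted HR
```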
Results

Among the 2305 patients originally enrolled in the WARCEF trial, 2294 (99.5%; mean age 60.8 (11.3) years, 19.9% females) had available data on the presence of DM at baseline and were included in this analysis. Of these, 722 (31.5%) had DM.

Baseline characteristics according to the presence of DM are reported in Table 1. Patients with DM were older (62.5 (9.8) years vs 60.0 (11.9) years, p < 0.001) and had a higher BMI (30.8 (6.2) vs 28.4 (5.7) kg/m², p < 0.001); they also showed a higher prevalence of non-Hispanic Black and other ethnicities and of most comorbidities, including hypertension, MI, and history of stroke/TIA. Patients with DM also showed worse symptoms and lower rates of current smoking or alcohol use.

When we analyzed factors associated with a diagnosis of DM at baseline, age and BMI were nonlinearly associated with the odds of DM (Fig. 1, panels A and B; p for nonlinearity < 0.001 and 0.009, respectively). Specifically, the odds of DM decreased with age below and above 65; conversely, the likelihood of DM increased sharply for BMI between 25 and 30 kg/m², reaching a plateau thereafter. We also found that non-Hispanic Black ethnicity (aOR 1.42) and the other characteristics shown in Fig. 1, panel C were associated with higher odds of a diagnosis of DM at baseline. Conversely, current smoking status showed an inverse association (Fig. 1, panel C).

Results of the Cox regression analyses for the risk of the primary and exploratory secondary outcomes are reported in Table 2. DM was associated with a higher hazard of the primary composite outcome (aHR [95% CI]: 1.48 [1.24-1.77]) and of all-cause mortality (aHR [95% CI]: 1.52 [1.25-1.84]); no statistically significant differences were observed for the risk of IS and ICH. Subgroup analyses for the risk of the primary outcome are reported in Fig. S1 in the Supplementary Materials. No statistically significant interaction was observed for any of the characteristics explored with respect to the risk of all-cause death, IS, and ICH in patients with vs. without DM. Nonetheless, some evidence for a higher magnitude of the DM effect was observed in patients aged 75 or more (p int = 0.158) and in females (p int = 0.105).

Finally, when we analyzed the interaction between DM and the effect of randomized treatment on the risk of the primary composite outcome, stratified by relevant subgroups, we confirmed the previous finding [17] of no statistically significant interaction (Fig. S2 in Supplementary Materials).

Discussion

In this post hoc ancillary study of the WARCEF trial, our principal findings are as follows: (1) DM was common in patients with HFrEF and was associated with other risk factors, including higher BMI, worse symptoms, and cardiovascular comorbidities; and (2) DM was associated with a significantly higher risk of the primary composite outcome during follow-up, with some evidence of a larger effect of DM in elderly and female patients. We also confirmed the previous observation that DM was not associated with a significantly different effect of warfarin vs. aspirin on the risk of the primary composite outcome in any of the subgroups explored, despite the higher risk profile of patients with DM.
The prevalence of DM that we found in our cohort is similar to that observed in other trials of patients with HFrEF [19,20]. This confirms that DM is a highly prevalent disease in this clinical setting; moreover, we found some evidence of ethnic differences in the prevalence of DM, with non-White patients being more likely to present with DM at baseline, in line with previous findings [21]. We also showed how DM is associated with more severe symptoms and with a higher burden of cardiometabolic conditions. Although the cross-sectional nature of our analysis does not allow for inference on whether these results are directly attributable to a DM-specific effect, they suggest that patients with DM and HFrEF present with a more complex phenotype, in line with the hypothesis that DM can foster the occurrence of HF through several pathways [22].

We also observed an association between DM and the risk of the primary composite outcome; similar results were found when considering death as an individual event. Conversely, only nonsignificant results were observed for the other two components (IS and ICH), perhaps due to the relatively low incidence of these events in this trial. These results appear consistent with previous evidence arising from randomized clinical trials (RCTs) [23] and also from real-world observational studies [24,25], which showed a detrimental effect of DM on the prognosis of patients with HF. In the subgroup analyses, we found broadly consistent effects of DM on the risk of the primary outcome across relevant subgroups. We did, however, find some evidence of a greater detrimental effect of diabetes on the risk of the primary outcome in patients ≥ 75 years and in women: in these subgroups, DM doubled the risk of the primary outcome, although without a statistically significant interaction.

These results expand previous evidence on how DM influences outcomes in patients with HFrEF. Indeed, previous studies already showed how women are disproportionally affected by the detrimental effects of DM on quality of life and outcomes [26]. Our results, although without reaching statistical significance, suggest that some subgroups of HFrEF patients may be more prone to the consequences of the DM-HFrEF interaction. While further evidence is needed to confirm and expand these observations, our analysis provides insights that may be useful in identifying those patients who may need closer surveillance. Indeed, female representation in clinical trials of patients with HF has been repeatedly advocated as a potential area of improvement [27,28], and sex-based undertreatment [27,29] could also contribute to these results. Similar considerations may apply to elderly patients [30,31].

Fig. 2 Kaplan-Meier curves for the primary composite outcome according to the presence of diabetes mellitus at baseline. Log-rank p < 0.001

When we evaluated how DM modified the effect of the randomized treatment on the risk of the primary outcome in relevant subgroups, we found no statistically significant interaction, reproducing the results found in the earlier WARCEF subgroup analysis, which showed no overall interaction between DM at baseline and the effect of warfarin vs.
aspirin [17], as noted above. Of note, these findings are also consistent with those of the COMMANDER-HF trial, which randomized patients with recent worsening HFrEF, sinus rhythm, and coronary artery disease to receive low-dose rivaroxaban or placebo on top of antiplatelet therapy [32]: while subgroup analyses did not show differences in patients with DM, some evidence of a potentially lower effect of anticoagulation in patients with DM was also observed, similar to our analysis [32]. We expanded these previous observations, showing that these results are consistent across a wide range of subgroups, who may present different risks of adverse events. Although our analysis of the interaction by subgroups was limited by the overall low power to detect differences, these results still provide interesting preliminary evidence to foster future research.

Indeed, several hypotheses may explain our findings. Platelet activity is enhanced in patients with DM due to several mechanisms [33,34], which also include the upregulation of Nox2: this has been previously described in patients with DM and linked with an increased risk of cardiovascular events in these subjects [35,36]. Overall, the role of platelets is currently considered crucial in the pathophysiology of thrombosis in patients with DM [37]. Indeed, although a potentially lower efficacy of aspirin has been hypothesized in patients with DM (also due to accelerated platelet turnover [38,39]), aspirin is still widely used and recommended for the prevention of cardiovascular events in patients with DM, particularly for secondary prevention [37]. The central role of platelets in the pathophysiology of thrombotic events in patients with DM may contribute to explaining our findings. Nonetheless, oral anticoagulation may provide some potential advantages, as shown by a post hoc analysis of the COMMANDER-HF trial, in which the use of a low-dose anticoagulant and antiplatelets was able to reduce the risk of thromboembolic events, although these were not the main determinants of morbidity and mortality in patients recruited in the trial [40]. While current evidence does not support the implementation of such approaches in clinical practice, future studies may be able to identify subgroups of DM patients who may gain some benefit from more complex antithrombotic strategies.

Taken together, our results have clinical implications. We showed that patients with DM and HFrEF are more complex, more symptomatic, and carry a higher burden of cardiovascular disease compared with patients without DM. This interplay impacts prognosis, an effect which we found to be driven by all-cause mortality. Of note, the antithrombotic treatment received did not influence prognosis in DM-HFrEF patients, despite their high risk of thromboembolic events. This may support the hypothesis that the complexity of patients with DM and HFrEF requires further efforts to improve prognosis and a more comprehensive and holistic management. Our results therefore appear consistent with recent guideline recommendations, which call for the implementation of multidisciplinary and integrated care approaches in HFrEF patients [41], and with recent evidence showing how the overall burden of morbidity, frailty, and complexity (encompassing also social determinants of health) represents a powerful determinant of adverse outcomes in HF patients [42][43][44].
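Continuing the earlier sketch, the interaction analyses summarized above can be approximated with the same hypothetical toolchain: the fragment below adds a treatment-by-diabetes product term to the Cox model and reads its p-value as the interaction test. Column names (and the covariate frame X) are the assumed ones from the previous sketch, not the WARCEF variable names.

```python
# Treatment-by-diabetes interaction on the primary outcome: a product term
# added to the Cox model built in the previous sketch (column names assumed).
from lifelines import CoxPHFitter

X["warfarin_x_dm"] = X["warfarin"] * X["dm"]  # interaction term
cph_int = CoxPHFitter()
cph_int.fit(X, duration_col="time_to_event", event_col="event")
# The p-value of the product term is the (Wald) interaction test.
print(cph_int.summary.loc["warfarin_x_dm", ["coef", "exp(coef)", "p"]])
```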
Strengths and limitations

We acknowledge some limitations. First, this is a post hoc, non-prespecified analysis of a randomized trial; therefore, we may not have had adequate power to detect differences, especially regarding the subgroup and interaction analyses. Results should therefore be interpreted with caution and as hypothesis-generating. Nonetheless, our findings appear consistent with previous evidence and have biological plausibility. WARCEF collected DM status at baseline but not other potentially relevant factors, such as duration of disease and glycemic control; we also did not have data regarding drugs for the treatment of DM. Of note, the WARCEF trial was conducted 20 years ago, when treatment options and recommendations for both DM and HFrEF were different and more limited; therefore, we were unable to explore the effect of potentially interesting drugs (such as sodium-glucose cotransporter-2 inhibitors or glucagon-like peptide-1 receptor agonists) on the relationship between DM and HFrEF. We also acknowledge the risk of other potential unaccounted confounders, whose effect we cannot exclude, although we adjusted our regression models to account for the most relevant confounders. Finally, we focused our analysis on the primary composite outcome of all-cause death, IS, and ICH, using a time-to-first-event approach, as in the main trial analysis. Our exploratory secondary outcomes were therefore represented by the individual components of the composite endpoint, and we observed low rates for nonfatal events, thus reducing our power to detect differences in patients with and without DM. As these analyses were also not adjusted for multiple comparisons, the results should be interpreted with caution and as hypothesis-generating.

Fig. 1 Association between baseline characteristics and diagnosis of diabetes mellitus at baseline. Panel A: age (p for nonlinearity < 0.001); Panel B: Body Mass Index (p for nonlinearity = 0.009)

Table 1 Baseline characteristics according to the presence of diabetes mellitus at baseline. ACE: angiotensin-converting enzyme, ARB: angiotensin receptor blocker, BMI: Body Mass Index, NYHA: New York Heart Association functional class, SD: standard deviation. † Data are for the use of these medications before the patients underwent randomization

Table 2 Event count and incidence rates for the risk of the primary and secondary outcomes according to the diagnosis of diabetes mellitus. aHR: adjusted hazard ratio, CI: confidence interval, IR: incidence rate
Compression syndromes of the popliteal artery due to intramuscular ganglion cyst of the biceps femoris: A case report and literature review

Intramuscular ganglion cyst (IMGC) is a very rare lesion of unclear pathogenesis that originates within the muscle. We encountered the case of a 49-year-old man who complained of intermittent claudication in the right lower limb for 2 months. An intramuscular ganglion cyst in the biceps femoris muscle was diagnosed and located by computed tomography angiography (CTA) and magnetic resonance imaging (MRI); it compressed the popliteal artery and resulted in ischemia of the right lower limb. Six months after surgical resection, there was no recurrence of the cyst and the popliteal artery remained patent. We describe this case together with a review of the relevant literature.

Introduction

A ganglion cyst is a common tumor-like lesion arising from various soft tissues that is generated by mucoid degeneration of the joint capsule, tendon, or tendon sheath (1). It usually occurs near the joints and ligaments of the extremities, especially the dorsal wrists and ankles, and typically in young women between 20 and 40 years old (2,3). However, an intramuscular ganglion cyst that originates within the muscle belly itself and concomitantly generates symptoms of arterial compression is a rare lesion (4,5). As previously reported, the diagnosis, localization, and complete resection of intramuscular tendon sheath cysts are challenging, and surgical exploration and pathological diagnosis are also necessary (6). Computed tomography angiography (CTA) and magnetic resonance imaging (MRI) can distinguish ganglion cysts from other soft tissue tumors and tumor-like lesions and provide specific information to determine the location of the lesion. As far as we know, this is the first case report of an intramuscular ganglion cyst arising from the biceps femoris.

Case presentation

A 49-year-old male was admitted to hospital with intermittent claudication of the right lower limb for 2 months, with a limping distance of 100 m and no rest pain (Rutherford III), which is an indication for surgery (7). His past medical history included coronary artery disease and hyperlipidemia. On physical examination, the pulses of the popliteal, dorsalis pedis, and posterior tibial arteries were not palpable. His skin color, skin temperature, and lower limb appearance were essentially normal, and the capillary filling time of the bilateral toes was < 2 s. Ultrasound showed reduced flow velocities in the arteries of the right lower limb: superficial femoral artery (34 cm/s), popliteal artery (24 cm/s), anterior tibial artery (20 cm/s), posterior tibial artery (17 cm/s), and peroneal artery (24 cm/s), with absent triphasic waves (Figure 1). CTA suggested severe stenosis and near-complete occlusion of the P1 segment of the popliteal artery (Figures 2A-C). MRI suggested that the popliteal artery was compressed by an oval cyst with high signal intensity on T2-weighted images, measuring 1.7 * 2.5 * 3.1 cm (Figures 2D-G). A diagnosis of popliteal artery entrapment syndrome was proposed. Exploration of the right popliteal artery and cystectomy were performed under epidural anesthesia. The popliteal artery was found to be compressed by a tendon sheath cyst on the medial aspect of the distal long head of the biceps femoris tendon, with a smooth, tough wall and jelly-like contents (Figure 3A). After cyst removal, blood flow in the popliteal artery and its triphasic waveform were restored (Figures 3B,C).
Postoperative pathology suggested a ganglion cyst (Figure 4). At the 6-month postoperative follow-up, the patient had no intermittent claudication, good arterial pulsation in the lower limbs, and no significant ultrasound abnormalities.

Discussion

Ganglion cyst (GC) is a common clinical condition, but cases of ganglion cysts compressing the surrounding blood vessels are rarely reported. Ganglion cysts that occur within the muscle belly and are associated with popliteal artery compression are extremely rare. The limited data available make them prone to misdiagnosis and underdiagnosis, and preoperative imaging to determine their origin and nature can be difficult. Moreover, because of the small number of cases included, previous studies have been unable to establish a uniform and effective treatment (6,(8)(9)(10)(11)(12)(13)(14).

Entrapment neuropathy is a frequent clinical manifestation because of the lesion's anatomical location. The popliteal vascular components are also subject to compression, including the vein, which is the next most medially located and easily compressible, compared with the artery, which is the most lateral and least frequently involved (15). The popliteal artery is deeper and stiffer than the vein and requires greater force to produce compression, so the incidence of this syndrome is minimal. Only in rare cases, when the popliteal artery is compressed by the cyst in a pulsatile manner, may the patient develop claudication due to intermittent ischemia of the lower limb (16)(17)(18)(19)(20). Venous compression is not mentioned in reported cases of arterial occlusion, but the vein may also be compressed and overlooked, because the symptoms of arterial compression are of greater clinical importance. After removal of the cyst, the pressure on any of the blood vessels is relieved.

The primary differential diagnosis is cystic adventitial disease (CAD), which typically occurs in otherwise healthy, middle-aged male patients, causing symptoms of sudden-onset progressive intermittent claudication. Although four theories have been proposed, the etiology of CAD is still not fully understood (21,22). During vascular embryonic development, undifferentiated mesenchymal cells are incorporated into the arterial wall. It is these mucin-secreting mesenchymal cells that subsequently produce mucoid material, from which adventitial cysts arise. This hypothesis is considered to be the most reasonable explanation for CAD. The etiology of tendon sheath ganglion cysts is likewise unknown. Repeated damage to the tendon sheath with subsequent cystic change may be the cause, as tenosynovitis or associated tendon tears are often seen around ganglion cysts (9). During the operation, it was clear that the cyst originated from the muscle belly rather than from the popliteal artery, supporting the final diagnosis. There is controversy regarding the final diagnosis of the disease.
Popliteal artery entrapment syndrome (PAES) is defined as a group of symptoms arising from a congenital anatomical abnormality between the popliteal artery and its surrounding muscles or bundles of tendinous and fibrous tissue (23). The currently used and widely accepted classification of PAES was proposed by Love and Whelan (24) and revised by Rich et al. (25). Under this system, type I is associated with an aberrant medial arterial course around the normal medial head of the gastrocnemius muscle. In type II, an abnormal medial head of the gastrocnemius inserts laterally on the distal femur and displaces the popliteal artery medially. Type III is characterized by an aberrant accessory slip from the medial head of the gastrocnemius muscle that wraps around the popliteal artery. In type IV, the popliteal artery is entrapped by the popliteus muscle. In type V, the popliteal vein is also involved. Type VI is considered functional and is recognized in the presence of a normally positioned popliteal artery that is entrapped by a normally positioned but hypertrophied gastrocnemius muscle. The characteristics of this patient do not conform to any type of PAES.

Popliteal cysts are also known as Baker cysts, a general term for synovial fluid cysts in the popliteal fossa that occur in the bursa of the medial head of the semimembranosus (gastrocnemio-semimembranosus bursa, GSB). Secondary popliteal cysts are most often seen in adults, and the cysts tend to be connected to the knee joint cavity. Sansone et al. performed a retrospective analysis of 1,001 knee MRI examinations obtained for various reasons and found popliteal cysts in 4.7-37% of these cases, all of which communicated with the joint cavity. The cyst in this case originated within the biceps femoris muscle belly, was not a synovial bursa of the medial head of the semimembranosus and gastrocnemius muscles, and did not communicate with the joint cavity; strictly speaking, it was not a popliteal cyst (26).

Several treatment options are available once the diagnosis of PAES has been made, the treatment objective being to release the popliteal artery from compression and preserve popliteal arterial flow (27). Conventional surgery, endovascular surgery, thrombolysis, or a combination of these modalities are all reasonable treatment options, depending on the patient's clinical symptomatology and anatomy (28). If the artery is occluded, stenotic, or aneurysmal, vascular reconstruction is mandatory in addition to the division of any entrapping structure (29). In this patient, after intraoperative resection of the cyst, popliteal artery blood flow was confirmed by ultrasound, and no further vascular repair or autografting of the great saphenous vein was performed.

Conclusion

Here we describe a patient with an IMGC of the biceps femoris muscle compressing the popliteal artery, which could not be diagnosed preoperatively; this highlights the necessity and difficulty of the differential diagnosis. Intraoperative exploration and postoperative histopathology are key to the diagnosis of IMGC. In our case, intraoperative ultrasound did not reveal any abnormality in the popliteal artery, there were no clinical symptoms at the 6-month follow-up, and the follow-up ultrasound was normal.
In cases where the popliteal artery has been compressed only for a short period and there is no thrombosis, intimal thickening, or aneurysm formation, intraoperative ultrasound can be used to determine the flow velocity and flow in the popliteal artery; if the flow is normal, decompression of the popliteal artery alone can be performed, without wall repair or saphenous vein grafting.

Data availability statement

The original contributions presented in this study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

Ethics statement

Written informed consent was obtained from the participant(s) for the publication of this case report. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

KZ, WY, WZ, and CL contributed to the conception and design of the study. KZ wrote the first draft of the manuscript and drew the illustrative figures. WY, HR, SW, and MS performed the acquisition, analysis, or interpretation of clinical data for the work. All authors contributed to manuscript revision, read, and approved the submitted version.
Engineering of frustration in colloidal artificial ices realized on microfeatured grooved lattices

Artificial spin ice systems, namely lattices of interacting single-domain ferromagnetic islands, have been used to date as microscopic models of frustration induced by lattice topology, allowing for the direct visualization of spin arrangements and textures. However, the engineering of frustrated ice states in which individual spins can be manipulated in situ, and the real-time observation of their collective dynamics, both remain challenging tasks. Inspired by recent theoretical advances, here we realize a colloidal version of an artificial spin ice system using interacting polarizable particles confined to lattices of bistable gravitational traps. We show quantitatively that ice-selection rules emerge in this frustrated soft matter system by tuning the strength of the pair interactions between the microscopic units. Via independent control of particle positioning and dipolar coupling, we introduce monopole-like defects and strings, and use loops with defined chirality as an elementary unit to store binary information.

Geometric frustration emerges in disparate physical and biological systems that span from disordered solids 1 , to trapped ions 2 , ferroelectrics 3 , microgel particles 4 , high-Tc superconductors 5 , folding proteins 6 and neural networks 7 . When topological constraints between the individual elements impede the simultaneous satisfaction of all local interaction energies, the system is geometrically frustrated, featuring a low-temperature residual entropy and a large ground state (GS) degeneracy 8 , as observed in water ice 9 and in rare-earth pyrochlore oxides, called spin ice [10][11][12] . On the theoretical side, topologically frustrated spin systems have been scrutinized for a long time, dating back to the work of Wannier 13 on the Ising model applied to a triangular lattice, in which the system cannot accommodate three spins on each plaquette in such a way that all antiferromagnetic couplings are minimized. In pyrochlore crystals, the situation is similar: the rare-earth ions carry a net magnetic moment, and they are located on the sites of a lattice of corner-sharing tetrahedra 10 . At each vertex, the moments can point either towards the centre of the tetrahedron or away from it, and pairs of spins align in the low-energy head-to-tail configuration. The degenerate GS of pyrochlore crystals follows the ice rules 14 , where two spins point towards the centre of the tetrahedron and two away from it. The ice rules were first introduced by Bernal and Fowler 14 to describe the proton ordering in water ice (ice I h ). In hexagonal ice I h , the lowest-energy configuration is characterized by two protons near an oxygen ion in a tetrahedron and two away from it, similar to the spin ice systems. To fulfil these rules, there are six equivalent atomic configurations at each tetrahedron, and Pauling showed that this degeneracy was at the origin of the residual entropy of water at low temperature 15 . However, the ice rules can be locally violated 16 due to the presence of disorder or fluctuations in the system, giving rise to mobile excitations that mimic the behaviour of magnetic monopoles and Dirac strings [17][18][19][20] . Investigating the governing rules in frustrated systems is a key issue not only for understanding exotic phases in magnetism but also for providing guidelines to engineer new magnetic memory and data processing devices 21 .
However, experimental investigations of bulk spin ice materials have often been restricted to averaged quantities, such as heat capacity curves 22 , magnetic susceptibility 23 or neutron scattering data 24 . Artificial spin ice has recently been introduced as an alternative system that displays ice-like behaviour and allows for the direct visualization of the individual spins 25 . Such systems are composed of lithographically fabricated ferromagnetic nanostructures 25,26 or nanowires 27 arranged into periodic lattices that generate frustration by design. Given the experimentally accessible length scale of a few nanometres, the spin orientation and the system GS can be visualized by using magnetic force microscopy, although monitoring the dynamics leading to the system degeneracy remains an elusive task because of the extremely fast spin-flipping process. Here we engineer a mesoscopic artificial spin ice that consists of an ensemble of interacting colloidal particles confined to lattices of gravitational traps. Our experimental system is inspired by a recent theoretical proposition 28 , in which electrically charged colloids in a square lattice of bistable optical traps were observed to obey ice-rule ordering for strong electrostatic interactions. The high demand in laser power required to generate the necessary optical traps, combined with the difficulty of tuning the surface charge in colloidal systems, motivates the use of an alternative approach. We overcome these problems by using interacting microscopic magnetic particles confined to lattices of bistable gravitational traps. The colloidal spin ice allows us to probe the equilibrium states and the dynamics of pre-designed frustrated lattices, and provides guidelines to engineer novel magnetic storage devices based on frustrated spin states.

Results

Realization of the colloidal spin ice. We use paramagnetic particles with tunable interactions inside lithographically sculptured double-well traps, as shown in Fig. 1a. Each trap is fabricated by etching an elliptical indentation in a photocurable resin and leaving a small hill in the middle. We arrange these bistable traps into honeycomb or square lattices, although different lattice conformations can be easily implemented by lithographic design. The elliptical wells have an average length of 21 μm and width of 11 μm, and we use a lattice constant of a = 33 μm for the honeycomb lattice and a = 44 μm for the square lattice. Paramagnetic microspheres of diameter d = 10.4 μm are dispersed in water and then allowed to sediment above the surface of the resin. The particles are then placed in the traps at a one-to-one filling ratio using optical tweezers (see Methods section). Within the double wells, the colloidal particles are gravitationally trapped in one of the two low-energy states. A typical profile of the double wells, obtained with an optical profilometer, is shown in Fig. 1b. With this technique we measure an average barrier height within the well of h = 0.43 ± 0.04 μm; Fig. 1c shows a typical well of ≈0.3 μm height. Given that the density mismatch between the particles and the suspending medium is Δρ = 0.9 g cm⁻³, we estimate a gravitational energy in the centre of the bistable trap of U_g = 540 k_BT, and an outer confining potential for each island of ≈3,000 k_BT, where k_B is the Boltzmann constant and T = 20 °C is the experimental temperature.
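As a quick plausibility check of the quoted trap depth, the short script below recomputes the gravitational barrier from the numbers given above (Δρ = 0.9 g cm⁻³, d = 10.4 μm, hill height h ≈ 0.43 μm). It is a back-of-the-envelope sketch that treats the barrier simply as Δρ·V·g·h with the full particle volume; the real well profile is smoother, so the ≈550 k_BT it returns should only be read as consistent with the ≈540 k_BT estimate in the text.

```python
# Back-of-the-envelope check of the gravitational trap depth quoted above
# (a sketch; the exact well geometry is idealized here).
import math

kB, T = 1.380649e-23, 293.15            # Boltzmann constant (J/K), 20 degC
d = 10.4e-6                              # particle diameter, m
rho_mismatch = 0.9e3                     # density mismatch, kg/m^3
h_hill = 0.43e-6                         # measured central hill height, m
g = 9.81                                 # gravitational acceleration, m/s^2

V = math.pi * d**3 / 6                   # particle volume, m^3
U_hill = rho_mismatch * V * g * h_hill   # gravitational barrier at the hill, J
print(f"U_hill ≈ {U_hill / (kB * T):.0f} kBT")   # ≈ 550 kBT, close to the ~540 kBT quoted
```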
Thermal fluctuations are unable to induce spontaneous switching of the particle state unless either smaller particles or a smaller hill are used. To tune the pair interaction between the magnetic colloids, the paramagnetic particles are doped with nanoscale iron oxide grains; as a result of this doping, the particles are responsive to a magnetic field B. Under the applied field, the particles acquire a dipole moment m = VχB/μ₀, where V = πd³/6 is the particle volume, χ = 0.1 the magnetic volume susceptibility and μ₀ = 4π × 10⁻⁷ H m⁻¹ the permeability of vacuum. Pairs of particles interact via dipolar forces and, for a magnetic field applied perpendicular to the plane, the interaction potential is isotropic and is given by

U(r_ij) = μ₀ m² / (4π r_ij³),

where r_ij = |r_i − r_j| and r_i is the position of particle i. We apply a homogeneous field ranging from 0 to 25 mT with an accuracy of 0.1 mT. When the paramagnetic colloids cross the central hill, we find that the corresponding out-of-plane motion produces a negligible effect on the overall collective dynamics.

Spin configuration and vertex energy. Two typical experimental realizations are shown in Fig. 1d,e for the honeycomb and square lattices, respectively, both following exposure to a constant field of amplitude B = 18 mT for 60 s. Each experiment is initialized by loading one particle in each well using optical tweezers and randomly flipping the position within the well according to a random number generator. The repulsive magnetic interactions force the particles to maximize their distance and are such that the particles can easily switch state but cannot escape from the gravitational trap. By assigning a vector (analogous to a spin) to each bistable trap, pointing from the vacant site towards the side occupied by the particle, one can construct a set of ice rules for the colloidal spin ice system, equivalent to those for artificial spin ice 28 . At each vertex where three traps (in the case of the honeycomb lattice) or four traps (in the case of the square lattice) meet, the vector assigned to each trap can point either in, when the colloidal particle is close to the vertex, or out, when it is far from the vertex, following the same classification scheme used in the three-dimensional pyrochlore tetrahedron. The vertices of the honeycomb lattice, sometimes called kagome ice since the spin midpoints are arranged in a kagome lattice, can have four different types of spin arrangements. The highest-energy configuration occurs when three particles are close to the vertex (K IV ), and the lowest-energy configuration has three particles far from the vertex (K I ). In contrast, the square lattice has six types of spin configurations: the highest-energy vertex is composed of four particles close to each other (S VI ), and the lowest-energy vertex has all the particles far away (S I ). The corresponding energetic weight of all the vertices in both lattices is shown at the bottom of Fig. 1. In particular, for the elliptical wells used in Fig. 1, the magnetic interaction between nearest neighbours can vary, with the applied field, from a negligible fraction of k_BT up to thousands of k_BT (see the sketch below); these interactions increase as more particles are added at each vertex. Moreover, we notice that, in contrast to artificial spin ice, the colloidal system is characterized by mobile particles whose pair interaction depends on their relative distance. The energy at each vertex thus changes as the particles move, and the corresponding GS results from a collective effect between the interacting units.
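To make the field-tuned coupling concrete, the sketch below evaluates the induced-dipole repulsion U(r) = μ₀m²/(4πr³) for the particle parameters given above. The 20 μm separation is an illustrative choice of nearest-neighbour distance, not a measured value; the point is the B² scaling of the pair energy, from a few k_BT at 1 mT to thousands of k_BT near the upper end of the 0-25 mT range.

```python
# Induced-dipole pair repulsion U(r) = mu0 * m^2 / (4 * pi * r^3) for
# moments m = V * chi * B / mu0 induced by a perpendicular field B.
# The separation r = 20 um is an assumed, illustrative nearest-neighbour distance.
import math

kB_T = 1.380649e-23 * 293.15             # thermal energy at 20 degC, J
mu0 = 4 * math.pi * 1e-7                 # vacuum permeability, H/m
d, chi = 10.4e-6, 0.1                    # particle diameter (m), volume susceptibility
V = math.pi * d**3 / 6                   # particle volume, m^3

def pair_energy(r, B):
    m = V * chi * B / mu0                # induced magnetic moment, A m^2
    return mu0 * m**2 / (4 * math.pi * r**3)

for B_mT in (1, 6, 12, 18, 25):
    U = pair_energy(20e-6, B_mT * 1e-3)
    print(f"B = {B_mT:2d} mT  ->  U ≈ {U / kB_T:7.1f} kBT")
```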
Ice-selection rules in the colloidal spin ice. Systematic experiments performed by increasing the interaction strength via the applied field reveal that the colloidal spin ice has a clear tendency to follow ice-selection rules for both the square and honeycomb configurations. Figure 2a,b shows that the fraction of low-energy vertices of types S III and K II increases up to ≈0.8. We confirm both trends by performing Brownian dynamics simulations, shown as continuous lines in Fig. 2a,b. In these simulations, we neglect many-body effects due to the relatively large separation between the interacting particles at each vertex; more details can be found in the Methods. In the ferromagnetic artificial square ice, Wang et al. 25 found that as the interaction increases, the system is dominated by vertices of types S III and S IV . In contrast, we observe that when the applied field increases, the S III vertices dominate over the S IV vertices due to their slightly lower energetic weight. This is closer to the true GS of the square ice, which corresponds to a lattice fully covered by S III vertices. To obtain the GS in the magnetic bar system, Zhang et al. 26 recently used a dedicated annealing procedure based on a rotating demagnetizing field, while for the colloidal spin ice system a long-range ordered GS arises by simply increasing the interaction strength between the magnetic particles. In the case of the honeycomb lattice, a different set of ice rules arises, in which the high-energy K IV vertices and their topologically connected K I counterparts disappear in favour of the K II and K III vertices. In the colloidal system, beyond a field of B = 9 mT, the K II vertices begin to prevail, because they are energetically more favourable and maximize the average particle distance. In reality, at much higher strength, the numbers of observed K II and K III vertices should converge due to the isotropy of the honeycomb lattice vertices compared with the square one.

Particle dynamics above the square and honeycomb lattices. One advantage of our mesoscopic system is that, by using particle tracking routines, we can directly follow the colloid displacement and monitor the entire ordering process for a given interaction strength. The dynamics are better visualized by displaying the net topological charge q at each vertex, calculated according to the dumbbell model, in which each spin carries a pair of opposite charges at its two ends 25 . Colour maps of the charge in the system at different times are shown in Fig. 2c,d for the square and honeycomb lattices, respectively. The GS for the square ice corresponds to vertices with a zero net charge. The highly frustrated honeycomb ice shows how low-energy vertices with net charge q = ±1 are preferred over high-energy vertices with q = ±3. Starting from an initial random distribution of particles within the traps, the square ice shifts towards a state without highly charged defects. However, some defects do remain frozen close to the sample edges, and the system converges to a low-energy metastable state. In contrast to the square ice, the honeycomb spin ice has two equivalent vertices and thus an inherent extensive degeneracy. It has been predicted to undergo a series of phases (Ising paramagnet, Ice I, Ice II and 'solid' ice) as the temperature decreases 27 . In Fig. 2d, we observe that the system organizes into a superlattice region of +1 and −1 charged vertices when a strong magnetic field is applied.
Discussion
Our system allows us to manipulate individual particles within the wells using optical tweezers. This method can be used to introduce defects, which can later be erased by turning on the magnetic field and thus increasing the interaction between the particles. We demonstrate this feature with the square lattice, although we have the same degree of control over the honeycomb ice. For the colloidal square ice, the spin directions in the GS (Fig. 3c) define chiral cells, which alternate in chirality in a checkerboard pattern. Pairs of defects emerge in the form of achiral cells with a net excess of magnetic charge when flipping one spin from the GS configuration. Depending on the location of these cells within the array, defects can be made stable or unstable at a given applied field. In Fig. 3a-c, we observe the evolution of a pair of defects with opposite charges, separated by a string of achiral domains indicated by the green crosses in Fig. 3a. When the annihilation of these defects involves only a few spin flips, the energetic cost for the system to recover its GS is low, and the defects rapidly disappear as the field is turned on, as shown in the sequence in Fig. 3b. The defect annihilation process occurs via a stepwise flipping of the particle position rather than via a cooperative shift of all spins simultaneously, as seen in Fig. 3f. In contrast, in Fig. 3d,e, where we show a string of defects that acts as a domain wall separating two GS regions with unmatched chirality, defective vertices are more difficult to erase because they require an entire region to flip to escape this metastable state. As a result, the domain wall remains practically frozen in place. The presence of large GS regions separated by domain walls emerges as a natural low-energy metastable state in ferromagnetic spin ice 19, even after a subsequent thermally induced annealing process 30, given the elusive nature of the true long-range ordered GS. The stability of domain walls between regions of different chirality suggests that one possible mechanism of information storage in the square system can be achieved by arranging the particles in the vertices in such a way as to maximize the number of spin flips required to reach the GS. For example, the triangular pattern of achiral cells shown in Fig. 4a is formed by flipping three of the four spins in a cell. The defects disappear by applying a field of 25 mT, which causes the three particles to shift consecutively, in a manner similar to that presented in Fig. 3b. In contrast, if we set a counterclockwise chiral cell in place of a clockwise chiral cell, surrounded by four achiral cells, fixing the defects requires a simultaneous four-spin reversal because each individual switch leads to a higher energy state. This simultaneous flipping is an energetically more expensive erasure process, as shown in Fig. 3c. Beyond 25 mT, the pair interactions can become stronger than the lateral confinement, and the colloids have been observed to escape from their gravitational traps in such a way that they rearrange into a triangular lattice.
For this reason, we use numerical simulations to determine the threshold field B_c necessary to reset the GS (details in the Methods). We confirm that the GS is reached by inverting the chirality of the central cell at a field of B_c = 30 mT. Cell writing with defined chirality can be used to store digital information in the form of 8-bit ASCII characters. Once written with optical tweezers, the chiral domains are stable below B_c. In addition, any low-energy defect, such as those shown in Fig. 4a, easily disappears. More examples of stable and unstable achiral cells are shown in Supplementary Figs 1 and 2 and commented on in Supplementary Note 1. In conclusion, we have engineered artificial colloidal spin ice states in which an external magnetic field fully controls the spin interactions and the collective dynamics leading to the degenerate GS of the system. Unlike the original proposition of a completely optical system 28, our bistable gravitational traps are sculpted in soft-lithographic platforms, can be arranged into periodic lattices and can be designed with diverse geometries and lattice constants. Our geometrically frustrated soft matter system provides a robust approach for probing the effect of disorder by manually introducing defects into the lattice pattern. Disorder in the system can also be created either by leaving traps empty or, as recently proposed 31, by creating sites of double occupation, which correspond to pairs of outward-pointing spins, not possible with the nanoscale spin ice system. Finally, the strategy presented here can offer guidelines for designing similar experiments on nanoscopic systems or probing the stability of spin arrangements to record information in magnetic data storage devices 32.

Methods
Fabrication of the soft-lithographic platform. The pattern was written via direct-write laser lithography (DWL 66, Heidelberg Instruments Mikrotechnik GmbH) on a 5-inch Cr mask with a λ = 405 nm diode laser at a 5.7 mm² min⁻¹ writing speed. As shown in Supplementary Figs 3 and 4, the small hill in the centre of each trap was obtained by drawing a small constriction in the middle of the elliptical well. These structures were exposed on a ≈100-μm-thick coverglass coated with a 2.8-μm-thick layer of the positive photoresist AZ-1512HS (Microchem, Newton, MA), deposited by spin coating (Spinner Ws-650Sz, Laurell) performed at 1,000 r.p.m. for 30 s. After the deposition, the photoresist was irradiated for 3.5 s with ultraviolet light at a power of 21 mW cm⁻² (UV-NIL, SUSS Microtech). The exposed regions were then eliminated by submerging the plate in AZ726MF developer solution for 7 s. Some representative images of the resulting structures are shown in Supplementary Figs 3-5. The substrate was finally covered with a thin layer of poly(sodium 4-styrenesulfonate) by using the layer-by-layer adsorption technique. More details are given in Supplementary Note 2.

Magneto-optical set-up and experiments. The experimental set-up allows applying magnetic and optical forces simultaneously and independently. It is composed of an inverted home-made optical microscope equipped with a white-light illumination LED (MCWHL5, Thorlabs), a charge-coupled device (Basler A311f) and a custom-made coil oriented perpendicular to the sample cell such that its main axis points along the z-direction. The coil was connected to a programmable power supply (KEPCO BOP-20 10M), which is remotely controlled, along with the image acquisition and recording, by a custom-made LabVIEW programme.
The photoresist is sensitive to ultraviolet light, so the white light of the LED is filtered with a long-pass filter with a cutoff at 500 nm (FEL0500, Thorlabs). Optical tweezers are realized by tightly focusing a λ = 975 nm, P = 330 mW butterfly laser diode (Thorlabs) with a 100× achromatic microscope objective (Nikon, numerical aperture = 1.2), which is also used for observation purposes. During the experiments, a solution is prepared with 3.5 ml of polystyrene paramagnetic particles (PS-MAG-S2874, Microparticles GmbH) in 10 ml of highly deionized water (MilliQ system, Millipore). A drop is placed on the soft-lithography structures and, after a few minutes, the particles sediment due to the density mismatch until they are suspended above the substrate by the electrostatic repulsive interactions with the negatively charged surface. We use the optical tweezers to fill all the bistable traps with exactly one particle, removing excess or aggregated colloids. After the initial setting, the particles are allowed to equilibrate in their wells for ≈2 min before applying the external field. We place a total of 84 particles in the square lattice and 64 in the honeycomb lattice in an experimental field of view of 325 × 222 μm². The effect of the boundary on the experimental system is discussed in Supplementary Note 3 and shown in Supplementary Fig. 6.

Brownian dynamics simulation. We perform two-dimensional Brownian dynamics simulations with periodic boundary conditions containing N particles arranged into an ensemble of double-well traps. A particle i at position r_i = (x_i, y_i) obeys the overdamped equation of motion η dr_i/dt = F_tot(r_i) + ξ(t), where η is the viscous friction and F_tot is the sum of forces acting on the particle, composed of three terms, F_tot = F_g + F_N + F_M. Here F_g is the gravitational force, F_N the normal force exerted by the double-well confinement and F_M the magnetic interaction between particles. The gravitational force is given by F_g = gVΔρ, where g is the gravitational acceleration, V is the volume of the particles and Δρ is the density mismatch. We assume a double-well shape for the trap potential, parameterized by dr, the displacement vector from the centre of the trap; d, the distance between the two stable positions; h, the height of the central hill; and k, the transverse width of the trap. The two unit vectors ê∥ and ê⊥ define the orientation of the trap: ê∥ is the vector that joins the two stable positions and ê⊥ is the transverse axis. The corresponding trap is shown in Supplementary Fig. 7a. Assuming the walls have a small inclination angle, the normal force can be calculated as F_N = F_g ∇t_z, where t_z is the local height profile of the trap, expressed in terms of the components dr∥ = dr · ê∥ and dr⊥ = dr · ê⊥. The magnetic interaction is calculated assuming every particle has a magnetic moment m = (|B|χV/μ0) ẑ, where |B| is the amplitude of the magnetic field, χ is the magnetic volume susceptibility, μ0 is the permeability of the medium and V is the particle volume. The total magnetic force exerted on one particle is then given by F_M,i = −Σ_j [3μ0 |m|² / (2π |r_ij|⁴)] r̂_ij, where r_ij is the vector that goes from particle i to particle j. Finally, ξ(t) in the equation of motion is a Gaussian white noise with zero mean and correlation function ⟨ξ(t)ξ(t′)⟩ = 2ηk_BT δ(t − t′). We numerically integrate the equation of motion using a finite time step of 0.01 s, substituting experimental parameters for most quantities. However, the height of the central hill h connecting the two circular traps in the double wells is modified to match the experimental data.
Small discrepancies can arise in the hill elevations across photolithographic platforms produced in different fabrication runs. To validate our theoretical model, we first perform simulations to match the displacements observed between isolated pairs of particles placed within two double wells oriented as in the square and honeycomb lattices. The good agreement between experimental data (scattered points) and simulation results (continuous lines) is shown in Supplementary Fig. 7b. The step-like behaviour of the pair distance observed for the blue and green curves is due to one particle overcoming the barrier. This process is energetically more expensive for isolated particles, reflecting that in the colloidal spin ice system the particle arrangement is a true collective effect rather than resulting from local energy minimization.
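A minimal sketch of the integration scheme implied by the Methods is given below, assuming a simple Euler-Maruyama step; the trap and gravity forces are omitted for brevity, the dipolar prefactor follows the reconstruction above, and all numerical values in the example call are illustrative rather than taken from the text.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability, H/m

def dipolar_forces(r, m):
    """Pairwise repulsive in-plane dipolar forces with the Methods prefactor,
    |F| = 3*mu0*|m|^2/(2*pi*dist^4), acting along the line between centres."""
    F = np.zeros_like(r)
    n = len(r)
    for i in range(n):
        for j in range(i + 1, n):
            rij = r[j] - r[i]                               # vector from i to j
            dist = np.linalg.norm(rij)
            f = 3 * MU0 * m**2 / (2 * np.pi * dist**5) * rij  # ~1/dist^4 magnitude
            F[i] -= f                                       # push i away from j
            F[j] += f
    return F

def bd_step(r, m, eta, kBT, dt, rng):
    """One Euler-Maruyama step of eta*dr/dt = F + xi(t), with
    <xi(t) xi(t')> = 2*eta*kBT*delta(t-t'). Trap/gravity terms omitted."""
    noise = rng.normal(0.0, np.sqrt(2 * eta * kBT * dt), size=r.shape)
    return r + (dipolar_forces(r, m) * dt + noise) / eta

# Illustrative parameters (moment, friction) chosen only for order of magnitude.
rng = np.random.default_rng(0)
r = np.array([[0.0, 0.0], [20e-6, 0.0]])  # two particles, 20 um apart
r = bd_step(r, m=1e-13, eta=1e-7, kBT=4.1e-21, dt=0.01, rng=rng)
```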
The Case for MUSIC: A Programmable IoT Framework for Mobile Urban Sensing Applications

This vision paper presents the case for MUSIC, a programmable framework for building distributed mobile IoT applications for urban sensing. The Mobile Urban Sensing, Inference and Control (MUSIC) framework is contextualized for scenarios where a distributed collection of static or mobile sensors collectively achieves an urban sensing task. The MUSIC platform is designed for urban-centric sensing applications such as location sensing on mobile phones for road traffic monitoring, air quality sensing and urban quality monitoring using remote cameras. This programmable system, at a high level, consists of several small sensors placed throughout a city on mobile vehicles and a centralized controller that makes decisions on sensing in order to achieve certain well-defined objectives such as improving spatial coverage of sensing and detection of hotspots. The system is programmable in that our framework allows one to create custom smart systems by writing custom control logic for sensing. Our contributions are two-fold -- a backend software stack to enable centralized control of distributed devices and programmability, and algorithms for intelligent control in the presence of practical power and network constraints. We briefly present three different urban sensing applications built on top of the MUSIC stack.

Introduction
The vision of smart cities is being largely powered by a broad array of new wireless telemetry applications for addressing urban challenges. Urban-centric wireless telemetry applications such as air quality sensing [3,5,17,4], road quality monitoring [7], road traffic delay estimation [20] and fleet tracking [1] rely on a large number of mobile IoT sensors that are controlled by a cloud controller to collectively achieve a distributed urban sensing task. Similar telemetry applications in other contexts include smart agriculture [12,8,2,22], smart water networks [11], wildlife monitoring [13] and many others [23]. For many of these applications, especially those that are applicable in urban contexts, fine-grained sensing is essential, thereby placing a minimum requirement on the number of sensors. For example, a city-wide deployment of a fine-grained air quality monitoring system may require 500-1000 sensors. Most urban sensing applications may also require a high frequency of sensing (such as 1 Hz or higher), especially at certain times, such as when the quantity being measured is changing rapidly in real time. Applications such as pothole detection and monitoring using accelerometer tracking [7] or location tracking using cellular network signals [21] may require even higher sampling rates. To manage a large fleet of mobile IoT sensors for these applications in an efficient manner, we need smart sensing and control policies to control the granularity of sensing, the granularity of data collection and the granularity of computations. These mobile IoT applications also have a significant impact on the cellular network footprint and the power consumption of the tiny mobile devices. According to [18], the amount of useful data generated by IoT devices in 2021 will be about 85 ZB, but only 7.2 ZB of that will actually be stored or used. Our experiments with air quality monitoring for this work have also shown that it costs approximately 30 MB of data per sensor per day to obtain fine-grained air quality data at the rate of 1 Hz.
For a city-wide deployment, this amounts to at least 15 GB of data per day (which also incurs non-trivial network data costs, with a yearly opex exceeding the capex of deployment). Power consumption by mobile devices is also a problem, which could be reduced if the sensors were turned off when not needed or if the sensors generated only the amount of data that is actually needed. This paper describes the design of the Mobile Urban Sensing Inference and Control (MUSIC) stack, which aims to address many of the challenges outlined above. The MUSIC platform is designed to enable easy development of cloud-controlled distributed mobile sensing applications. In MUSIC, a cloud controller can control a distributed collection of mobile devices and sensors using a sensing policy and objective that can be easily specified. In this paper, we describe the MUSIC system, consisting of the stack built on top of conventional networking stacks, and show how it can be used to provide flexibility and programmability in three real-world sensing applications -- spatial coverage mapping for air quality sensing, hotspot detection for road traffic analysis and dirt detection for urban cleanliness monitoring. The MUSIC platform is flexible enough to work with a diverse array of sensors that can connect to a mobile device, possibly from multiple vendors, and programmable enough to support custom sensing strategies given an objective sensing function and constraints. The MUSIC platform can deal with three constraints: (a) power-awareness of edge devices; (b) cost-awareness due to network costs that may vary by country and application needs; (c) network-awareness to make the application resilient in the face of intermittent connectivity and lack of reliable sensing data. In this vision paper, we specifically aim to demonstrate the case for a MUSIC stack and demonstrate the utility of the stack using real-world urban sensing applications. We have early experiences tailoring the MUSIC platform for three applications: air quality monitoring in Delhi, traffic hotspot detection in New York City and road cleanliness detection in Delhi. All these applications have been built using a cloud service and an Android mobile application which can interconnect with built-in, fixed or Bluetooth sensors.

Related Work
There has been a broad array of work on mobile IoT applications, urban sensing applications, wireless telemetry and specialized IoT applications. However, we are unaware of any generic and programmable platform that can support several distributed sensing applications with flexible sensing policies and programmable objective functions and constraints. We outline some of the key related works. Smart sensing for specialized applications: In the field of agriculture, there have been works on making irrigation smarter in order to reduce wastage of water. In [12] and [8], the authors implement WSNs for remote sprinklers, along with sensing and control for variable-rate irrigation. [22] describes a full-fledged system for smart irrigation where the backend server takes real-time moisture data as input to determine sensing decisions. In the context of urban air quality monitoring, given increasing concern over poor air quality in many parts of the world, there has been a recent surge of commercially available low-cost portable sensors by various vendors [9,14,19]. The drawback is that each vendor has its own frontend and backend system that is usually compatible only with its own products.
This becomes a challenge when conducting large-scale urban air quality monitoring because it restricts freedom in choosing devices based on their specifications. Likewise, there have also been numerous experimental works on mobile urban air quality monitoring [6,4,3,5,17]. While all of them contain real-time reporting in some form, we have not found evidence of programmability and extensibility for implementing custom policies. Similar architectures: Mobile Fog [10] is a system developed for IoT applications such as vehicle tracking using cameras and traffic monitoring with MCEP. While they demonstrate the ability to perform analytics at reduced latency and network bandwidth, there is no specific focus on enabling a flexible programmable stack across applications. Ravel [16] is a system designed to aid ease of development of IoT applications, but using the traditional MVC architecture, which calls for user input for control.

The MUSIC Programmable Stack
Whereas most smart sensing systems today have hardcoded policies built into them, we envision the ability to program custom policies in the MUSIC platform. The policies serve primarily to dynamically control the amount of data that is being collected by the sensors on the ground via three means -- turning specific sensors off when not needed, varying the sensing or reporting frequency and suggesting changes in the spatial position of the sensor. This type of control requires a centralized "controller" that monitors the data from all the sensors and provides continuous feedback. The MUSIC platform is a modular layered stack built over regular communication channels to enable intelligent and ML-based control. Figure 1 illustrates the stack. The highest layer is responsible for data analytics and visualization based on collected sensing data. The vision of the ML control layer is to support different machine learning and standard optimization algorithms that help determine the best policies subject to our network, cost and power constraints, and translate these policies into commands. The command control layer translates these commands into actionable commands for individual devices over existing communication data channels, which are maintained by the data stream layer. MUSIC is an end-to-end system that consists of a mobile phone application and a backend server-cum-controller. The frontend mobile app is the gateway for sensors to report data. Sensors are either built-in ones in the phone (e.g. accelerometer, GPS or compass) or external ones that communicate over Bluetooth (e.g. air quality monitors). We emphasize that the gateway does not need to be a mobile phone, but could be a standalone sensor with wireless connection capabilities and general-purpose computing hardware. The backend serves to provide access to the data as well as to implement control by sending "commands" to the mobile phones, such as START and STOP, based on sensing decisions. The mobile app is adapted from an existing open source project [9] and we used it for air quality monitoring as well as for collecting GPS traces, camera and accelerometer readings for the traffic and road cleanliness applications. Next, we outline the key properties of these layers.

Data Layer
Built over the transport layer, either TCP or UDP, this layer sets the format for data that is sent from the IoT devices to the cloud. There is a master Driver thread that opens a server-side TCP socket for incoming connection requests from edge nodes.
A new thread, called the phone thread, is spawned for each new edge node that connects, in order to receive data from the node as well as to send control commands. There are broadly two types of messages sent by the edge nodes -- a KeepAlive message and a data message. The former is a periodic ping-style message that is aimed at informing the backend about the device's presence as well as the changing IP address of the mobile edge. It contains fields such as time stamp, latitude, longitude and battery life. The full set of fields in a KeepAlive message is shown in listing 1. The second type of message is a data message, which contains data collected by the sensors. The exact set of fields in this message depends on the sensor data that is reported. We show two listings below (2 and 3), for camera image data and other sensor data respectively.

Sampling Layer for Control Communication
We refer to this as the "sampling" or "control" layer. As figure 1 shows, the sampling layer is built over a transport layer protocol such as TCP. In our implementation thus far, we have used TCP, even though it can be implemented equally well over UDP. Mobile devices keep a separate TCP connection to the cloud for this purpose, over which the cloud sends commands to the device, such as START and STOP. These commands perform the simple tasks their names indicate -- a START command received by the edge (mobile phone) results in the app starting a new recording session. The particular sensors to be started, as well as the frequency at which the sensors are to report data, are provided as arguments to the command. A STOP likewise causes the current recording session to end. The data collected between a START and a STOP is saved temporarily on the phone until a SEND command is received, at which point the data is sent to the server. Whether the data is compressed or not is indicated by an argument to the command. The message is a simple text JSON as shown in listing 4. As the communication happens over a cellular network, the IP address associated with the mobile edge does not remain fixed; therefore, the phone periodically sends ping packets to the server, which serves two purposes: i) it helps to know if the edge is alive, as the lack of such packets most probably indicates an edge that has died or is malfunctioning, and ii) the server is constantly updated about the IP address of the edge node to which commands are sent. This type of control is akin to the control in software-defined networks, where the central controller writes rules into the flow tables of the routers. In our application, mobile phones are the nodes that act as middleboxes and carry out the rules pushed by the controller.

Command Layer
This is a software layer that converts requests for data by higher layers into messages for the sampling layer. High-level requests for data are specified in the form of policies. Each policy includes a list of locations, the list of sensors and individual sensing frequencies. The Driver thread (§3) converts these into the command messages described earlier, replaces the previous policy with the new policy and keeps the new policy going until the next policy is received from the higher layer. When a new sensor joins the list, the default policy is to repeat in a cycle -- sense for 20 seconds at default frequencies, stop and send all the data. So the command cycle is START (20 seconds), STOP, SEND. The default frequencies vary depending on the sensor type. For the accelerometer, we set the default frequency to be 20 Hz, and for air quality sensing, 1 Hz. However, the frequency at which data is reported ultimately depends on the fidelity of the sensor. If the instructed frequency is higher than the maximum frequency at which the sensor can sense, then the sensor will sense at its maximum possible frequency.
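The listings themselves are not reproduced in this excerpt, so the sketch below only illustrates the flavour of the data and sampling layers: a per-edge phone thread parsing newline-delimited KeepAlive JSONs, and a helper that pushes START/STOP/SEND commands back over the same connection. All field names and framing details are assumptions, not the exact schema of listings 1-4.

```python
import json
import socket
import threading

def handle_phone(conn: socket.socket, registry: dict, lock: threading.Lock):
    """Per-edge 'phone thread': read newline-delimited JSON messages and
    track device liveness/position. Field names are illustrative only."""
    buf = b""
    while chunk := conn.recv(4096):
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            msg = json.loads(line)
            if msg.get("type") == "keepalive":
                with lock:
                    registry[msg["device_id"]] = {
                        "lat": msg["lat"], "lon": msg["lon"],
                        "battery": msg["battery"], "conn": conn,
                    }

def send_command(device: dict, command: str, **args):
    """Send a control command (e.g. START/STOP/SEND) as a JSON line."""
    payload = {"command": command, **args}
    device["conn"].sendall(json.dumps(payload).encode() + b"\n")

# e.g. send_command(registry["phone-07"], "START", sensors=["pm25"], freq_hz=1)
```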
ML Layer for Control
This machine learning (ML) layer for control is the layer that computes application-specific policies based on observations and constraints. The data collection objective usually is to collect as much sensor data as necessary to perform the required analytics and make subsequent decisions. The application layer constraint is typically to ensure a base sampling requirement to achieve the urban sensing objective. The lower-layer constraints in the problem are three-fold: network, cost and power. This layer constantly accesses the data that is received from the edge nodes, runs analytics on it, such as field estimation, forecasting and hotspot detection, and outputs policies after attempting to satisfy the constraints. As an example, if the inputs from the ground are GPS traces from several mobile phones placed in cars, and the objective is to detect as many traffic hotspots (road links with high congestion) as possible, then this layer would first map the location reports onto road traffic segments/links, then estimate average speeds in those segments based on the movements of the mobile phones, and then estimate future average speeds based on current and historical trends. The road segments with very low forecasted average speeds are potential hotspots. Following this forecasting procedure, a new policy would be generated, such as increasing the frequency of sensing in the segments of interest and stopping sensing in the segments that exhibit close to free-flow traffic.

Applications and Experiences
In this section, we outline three specific applications that support different types of commands and policies generated based on sensing objectives and constraints. We describe three example objectives here in order to present our case for the MUSIC platform: (a) spatial coverage, for air quality sensing; (b) hotspot detection, for traffic congestion inference; (c) object recognition algorithms, for cleanliness monitoring.

Spatial Coverage Mapping of Air Quality in Delhi
We have deployed an air quality monitoring platform comprising a few sensors in Delhi (a highly polluted city), built on top of the MUSIC platform. When collecting data such as air quality or GPS locations from buses on the roads, our spatial coverage objective is to ensure that the sensors report data in a "coordinated" and intelligent manner so as to avoid excessive data reporting and power consumption. In our formulation, the spatial coverage mapping function takes as input sensor readings from multiple locations over a period of time and computes a mean field for pollution in a locality using sensor interpolation techniques. The problem, then, at every regular interval, is to make a decision for each sensor on whether or not it should sense and at what frequency, so that the constructed field has as low an error as possible while satisfying both the network and power constraints. Using the MUSIC system, this would be achieved by implementing such a function at the ML Layer. The error computed at the ML Layer would then be translated into actual commands to be sent (such as increasing the sampling frequency) at the Control Layer. The commands would then be prepared by the Sampling Layer, conforming to the lower-level message format expected by the Communication Layer. A simpler implementation of the spatial coverage mapping function and control for air quality would work as follows. A threshold distance for good separation between two air quality sensors is about 0.5 km. This is based on real-world sensor placements using our low-cost air quality monitors in Delhi. An example control logic (see the sketch at the end of this subsection) would work simply as follows -- monitor pairwise distances of sensors on the ground, and if any pair is closer than 0.5 km, send a STOP command to one of them to stop sensing. Once the pair crosses the 0.5 km boundary, send a START command again. This can be done easily by maintaining state in the backend. Which sensor we choose to send the command to depends on the current state of that sensor, including its battery level. We used this system for real-world air quality sensing on roads in the city of Delhi, India. We placed Airbeam air quality sensors [9] in a small region in the city of Delhi. Each sensor is hooked up to a mobile phone that runs our smart sensing app. The default sensing setting is to sense at the maximum frequency possible by the air quality sensor (1 Hz) for about 20 seconds, then break for another 10 seconds and then restart the cycle. So the command cycle is START (20 seconds), STOP, SEND. The server waits till all the data is received before sending the next START command. This cycle was able to capture data sufficiently well. Figure 2a shows the app in action. We placed 5 such sensors in 5 different locations in a small region in South Delhi in India, as shown in figure 2b. We aim to extend this to a larger-scale deployment.
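A minimal sketch of this pairwise-distance control logic, with a haversine distance helper and a battery-aware choice of which sensor to stop, might look as follows; all identifiers and the example coordinates are illustrative.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between (lat, lon) pairs a and b."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def coverage_policy(sensors, threshold_km=0.5):
    """Return {sensor_id: 'START'|'STOP'}: stop the lower-battery sensor of
    any pair closer than the threshold; (re)start everything else."""
    ids = list(sensors)
    actions = {sid: "START" for sid in ids}
    for i, si in enumerate(ids):
        for sj in ids[i + 1:]:
            if haversine_km(sensors[si]["pos"], sensors[sj]["pos"]) < threshold_km:
                victim = min((si, sj), key=lambda s: sensors[s]["battery"])
                actions[victim] = "STOP"
    return actions

sensors = {
    "s1": {"pos": (28.543, 77.192), "battery": 0.80},
    "s2": {"pos": (28.545, 77.193), "battery": 0.35},  # ~0.24 km from s1
}
print(coverage_policy(sensors))  # {'s1': 'START', 's2': 'STOP'}
```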
Then the commands would be prepared by the Sampling layer, conforming to the lower-level message format expected by the Communication Layer. A simpler implementation of the spatial coverage mapping function and control for air quality would work as follows. A threshold distance for good separation between two sensors is about 0.5 km, in the case of air quality sensors. This is based on real world sensor placements using our low-cost air quality monitors in Delhi. An example control logic would work simply as follows -monitor pairwise distances of sensors on the ground, and if any pair is closer than 0.5 km, send a STOP command to one of them to stop sensing. Once it crosses the 0.5 km boundary, send a START command again. This can be done easily by maintaining state in the backend. Which sensor we choose to send the command to depends on the current state of that sensor, including their battery levels. We used this system for real world air quality sensing on roads in the city of Delhi, India. We placed Airbeam air quality sensors [9] in a small region in the city of Delhi. Each sensor is hooked up to a mobile phone that runs our smart sensing app. The default sensing setting is to sense at the maximum frequency possible by the air quality sensor (1 Hz) for about 20 seconds, and then break for another 10 seconds and then restart the cycle. So the command cycle is -START (20 seconds), STOP, SEND. The server waits till all the data is received before sending then next START command. This cycle was able to capture data sufficiently well. Figure 2a shows the app in action. We placed 5 such sensors sensors in 5 different locations in a small region in South Delhi in India, as shown in figure 2b. We aim to extend this to a larger scale deployment. Road Traffic Hotspot Detection in New York City Hotspots are interesting because they point to highly localized points of unusual activity, which may be a source for a larger problem. For instance, a highly localized activity such as a road accident or construction work on a particular road segment would exhibit ripple effects that In our formulation, the hotspot detection function is a function that takes as input recent historical readings from an array of sensors and outputs a boolean array, indicating whether or not each sensor is located in a hotspot. Assuming that no two sensors in a neighborhood are co-located, we define a hotspot location as one that exhibits an unusually higher or lower average reading than the others. In the MUSIC system, just as in the case of the spatial coverage function, the hotspot detection function would be implemented in the ML Layer. When any hotspots are detected (i.e. if there are 1s in the boolean array output), then a request for higher frequency data is sent. This is then translated into appropriate commands by the Control Layer. In an application to monitor road traffic congestion, GPS readings are recorded from mobile vehicles. With an array of location traces (as (lat,lon) coordinates) along with timestamps from several mobile vehicles over a period of time, average vehicular traversal speeds can be computed for each road segment or link. A traffic hotspot may be defined arbitrarily, but let us say we call it a road segment a hotspot if there is a sudden drop in vehicle speed in that segment over a sustained period of time. 
We have tested a version of our MUSIC platform for traffic congestion detection with a small number of mobile vehicles. To demonstrate the potential of this application, we have worked on congestion and hotspot detection using open GPS traces in New York City, using city bus mobility trace data from the NYC MTA [15]. The data was available for three months for all the buses in the city. One can imagine a setting where all these buses are supported and controlled by a MUSIC traffic application. To detect congestion hotspots, we adopted the following approach -- divide roads into segments defined by the portion between two consecutive bus stops on any bus route, determine the average bus traversal speed in every segment in every 10-minute interval for 3 months, and use the first two-thirds of the resulting time-series data to train a predictive model to forecast average speeds in neighboring segments. We built a graph neural network to achieve the predictive task. Figure 3b shows the speed trend on a single day on a single segment, with the dotted line showing our model's prediction of average link speeds at future times. Figure 3a illustrates some sample segments from the dataset. We have obtained clearance from a transportation board to perform a larger-scale rollout of the MUSIC traffic application in a large city in a developing country.

Urban Cleanliness Monitoring in Delhi
Urban spaces (both indoor and outdoor) in many large cities are dirty. The dirt detection application aims to demonstrate how the cameras of a distributed collection of mobile phones can be used as sensors for dirt detection, in conjunction with image processing algorithms at the backend to detect "dirty items on the road". We developed an application to detect dirt, dust and indications of lack of cleanliness in urban spaces. Using deep neural networks, a prototype was implemented successfully to detect dirt patches in hospital rooms in the city of Delhi, India. We are currently working on extending this work to wider cases such as roadscape photos. Figure 4 shows an image taken with our application, triggered by the CAPTURE IMAGE command, with the blue circle showing the manual annotation for training the system. With our understanding that such algorithms can be developed feasibly, these algorithms can then be integrated into our backend so that they can be used for making sensing decisions. In the same example, if more than a threshold amount of dirt is detected, then other sensors can be started, such as those for PM concentration, dust concentration or humidity. This application primarily demonstrates an alternative setting where the sampling frequency can be controlled based on the output of the cleanliness detection algorithms, to determine the areas that need to be sampled more than others. This application has been tested in indoor and outdoor contexts in Delhi and we plan to deploy it at scale in the future.
Conclusion
In this paper, we have presented MUSIC, a programmable end-to-end platform for various urban sensing and telemetry applications. We presented the MUSIC stack in detail, which enables us to implement the intelligence in the system via centralized control, determining application-specific sensing policies to meet specific urban sensing objectives subject to network, power and cost constraints. We showed early experiences with the system for three applications -- spatial coverage mapping for urban air quality, road traffic congestion detection and dirt detection in urban spaces.
Sulfated oligosaccharide activates endothelial Notch for inducing macrophage-associated arteriogenesis to treat ischemic diseases

Significance
Rapid establishment of collateral arterial circulation is crucial for the treatment of ischemic diseases. However, conventional pro-angiogenic drugs mostly suffer from abnormal angiogenesis and potential cancer risk, and currently, no off-the-shelf biomaterials can efficiently induce angiogenesis. In this study, we show that sulfated oligosaccharides can regulate the polarization of host Mφs and their differentiation into perivascular cells, thereby effectively inducing in situ arterial regeneration. Sulfated oligosaccharides activate Notch signaling to facilitate arterial regeneration and development through communication between perivascular Mφs and arterial endothelial cells. We expect that these findings may pave the way for the development of biomaterial-based therapeutic arteriogenesis to treat ischemic diseases.

Ischemic diseases lead to considerable morbidity and mortality, yet conventional clinical treatment strategies for therapeutic angiogenesis fall short of being impactful. Despite the potential of biomaterials to deliver pro-angiogenic molecules at the infarct site to induce angiogenesis, their efficacy has been impeded by aberrant vascular activation and off-target circulation. Here, we present a semisynthetic low-molecular-weight sulfated chitosan oligosaccharide (SCOS) that efficiently induces therapeutic arteriogenesis with a spontaneous generation of collateral circulation and blood reperfusion in rodent models of hind limb ischemia and myocardial infarction. SCOS elicits anti-inflammatory macrophages' (Mφs') differentiation into perivascular Mφs, which in turn directs artery formation via cell-to-cell communication rather than secretory factor regulation. SCOS-mediated arteriogenesis requires a canonical Notch signaling pathway in Mφs via the glycosylation of protein O-glucosyltransferase 2, which results in promoting arterial differentiation and tissue repair in ischemia. Thus, this highly bioactive oligosaccharide can be harnessed to efficiently direct therapeutic arteriogenesis and perfusion for the treatment of ischemic diseases.
ischemic diseases | in situ regeneration | therapeutic arteriogenesis | sulfated oligosaccharide | perivascular macrophage

Ischemic vascular diseases, such as coronary artery disease, cerebral ischemia, and peripheral arterial disease, have long been among the most severe diseases, with high morbidity and mortality (1). Therapeutic strategies for inducing arteriogenesis and collateral reconstruction, restoring the blood supply, and alleviating the ischemic injury are of crucial importance in ischemic disease interventions (2-5). There has long been an interest in developing effective treatments based on systemic delivery of exogenous pro-angiogenic molecules [e.g., vascular endothelial growth factor (VEGF) and fibroblast growth factor] to rapidly reconstruct the functional vasculature network (6, 7). However, these growth factor-based therapeutics are still far from satisfactory. Notably, high doses of angiogenic agents are commonly used to efficiently induce therapeutic angiogenesis owing to their short half-life and rapid elimination at the ischemic site, which inescapably disrupts the homeostatic balance of the local microenvironment with an increased risk of oedema and tumor formation (8). Moreover, the existing therapeutic strategies mainly focus on inducing highly branched capillary beds (9, 10), and there is still a lack of therapeutic modalities for directing the formation of functional arteries to meet the demands of sufficient circulation in ischemic diseases.

Ischemic diseases are invariably associated with the infiltration of inflammatory cells, among which macrophages (Mφs) play a pivotal role in immunomodulation and directly contribute to arteriogenesis through secretion of a multitude of pro-angiogenic cytokines (11-13). Additionally, Mφs can also modulate arteriogenesis and tissue repair through intercellular communication (14). Expanding the scope of immunotherapies to encompass ischemic injuries by promoting the therapeutic functions of Mφs, thereby enhancing functional blood perfusion, would undoubtedly be an attractive prospect for a range of acute ischemic diseases. However, insufficient regulation of Mφs with spontaneous secretion of angiogenic factors results in inadequate neovascularization and hinders complete recovery from ischemia. Additionally, uncontrolled release of inflammatory cytokines can lead to excessive tissue fibrosis and even atherosclerosis, ultimately impeding successful tissue repair. Nowadays, materiobiology-based therapeutic strategies have proven to be auspicious for in situ ischemic treatment, utilizing the physicochemical properties of biomaterials to accurately regulate the recruited Mφs that produce a series of metalloproteinases, vasoactive substances, chemokines, and growth factors to rebuild the collateral circulation (15). However, the process of arteriogenesis involves multicellular assembly to form a complex structure with three concentric layers (i.e., endothelial cells, smooth muscle cells, and connective tissues) (16), which imposes higher demands on biomaterials for regulating the host microenvironment and inducing mature arteries. Furthermore, the role of biomaterial-regulated Mφs in arteriogenesis remains unknown, particularly for implanted biomaterials. Therefore, it is imperative to design proper biomaterials in a way that regulates Mφ behaviors to promote arteriogenesis, and to reveal the underlying regulatory mechanisms.
Natural polysaccharides with specific structural features ubiquitously exist in mammalian tissues, in which sulfated polysaccharides often serve as a reservoir to anchor endogenous growth factors, with the unique ability to regulate the bioactivity of endogenous cytokines to efficiently induce angiogenesis (17, 18). Previous studies have also indicated that sulfated polysaccharides possess Mφ-regulatory activities by binding and activating specific carbohydrate receptors on the surface of Mφs (19), giving rise to a reduction in pro-inflammatory cytokine secretion and an increase in the release of pro-angiogenic factors, ultimately promoting vascularization (20, 21). In fact, polysaccharides are susceptible to enzymatic or hydrolytic degradation in vivo (22), resulting in the formation of oligosaccharides that further participate in wound healing. As degradation products, oligosaccharides can efficiently regulate the pathological microenvironment and therapeutically alleviate inflammatory responses (23), oxidative stress (24), and hypertension (25) in the treatment of ischemic diseases. Apart from that, oligosaccharides also play pivotal roles in maintaining arterial wall integrity and regulating cellular behavior within it (26, 27). However, the mechanisms by which oligosaccharides interact with host cells and modulate endogenous growth factors to induce in situ arteriogenesis remain largely elusive.

In a previous study, we investigated the efficacy of various chitosan-derived polysaccharides in inducing therapeutic angiogenesis in ischemia, which suggested that the presence of a sulfated group and the saccharide sequence is essential for promoting angiogenesis (21). We hypothesized that the properties of oligosaccharides can also independently influence arteriogenesis by regulating the pathological microenvironment; to test this, we examined two oligosaccharides [chitosan oligosaccharides (COS) and sulfated COS (SCOS)] with distinct electrical charges. Here, we aimed to investigate the role of SCOS in promoting arteriogenesis and enhancing blood perfusion recovery in ischemic tissues. Additionally, we sought to elucidate the mechanism by which SCOS regulates the in vivo microenvironment and stimulates the formation of blood vessels. Our findings suggest that SCOS regulates perivascular Mφs to induce arteriogenesis via a canonical Notch signaling pathway. Therefore, by designing bioactive oligosaccharides, we can create an implant that efficiently induces artery formation and in turn addresses ischemic diseases.

SCOS Promotes Arteriogenesis Following Ischemia Injury in Mice. The synthesized SCOS was subjected to various analytical techniques, including gel permeation chromatography, elemental analysis, ζ-potential analysis, NMR, and Fourier transform infrared spectroscopy (FT-IR). Representative NMR and FT-IR spectra revealed that the SCOS was successfully modified with sulfated groups (SI Appendix, Fig. S1). Zeta potential measurements revealed that SCOS and COS exhibited a negative charge and a positive charge, respectively, which also indicated that the cationic hydroxyl group on COS was replaced by an anionic sulfated group.

To assess the pro-angiogenic activity of SCOS and COS, we first implanted the scaffolds into the ischemic hind limbs (Fig. 1A). Blood perfusion in the ischemic hind limb, normalized to the contralateral limb, was examined by Laser Speckle imaging with quantitative analysis on days 1, 3, 7, and 14 postoperation (Fig. 1B).
The results showed that hind limb perfusion was reduced postoperatively by 50% and 40% at the ischemic sites and the paw, respectively. In mice implanted with SCOS GS, perfusion measured at the implanted site and paw progressively increased over time and reached its peak on day 7 after implantation (Fig. 1 C and D). In contrast, an attenuated recovery of perfusion was observed in both Blank GS and COS GS, which meant that collateral circulation was not efficiently reconstructed.

To better understand how SCOS GS restored the perfusion, we examined the vasculature of implants retrieved on days 7 and 14. Synchrotron radiation-based microcomputed tomography showed a clear invasion of microvessels from the host artery into the implants (Fig. 1E). The quantitative analysis revealed significantly higher relative blood vessel volume and vascular number in SCOS GS compared to COS GS and Blank GS. Moreover, these parameters exhibited a gradual increase from day 7 to day 14 postimplantation (Fig. 1 F and G). Next, we also assessed the levels of α-SMA- and CD31-positive arteries in the implants retrieved from the ischemic hind limb after implantation for 7 d and 14 d, respectively. We found that abundant CD31-positive endothelial cells spontaneously organized into vascular networks that were stabilized with α-SMA-positive cells to form typical arteries in SCOS GS on day 7 compared to Blank GS and COS GS (Fig. 1 H and I). The number of arteries in COS GS had no significant difference compared to Blank GS. Flow cytometry analysis also confirmed a higher quantity of CD31hi Sca-1hi arteriole cells in SCOS GS on day 7 after implantation in the ischemic hind limb, with a slight decrease observed on day 14 (SI Appendix, Fig. S2). Simultaneously, the dissolved free oxygen in the SCOS-treated ischemic muscle tissue was observed to steadily increase over time until it reached the normal physiological level and maintained equilibrium, as demonstrated by a gold-standard oxygen microelectrode (Fig. 1J). Considering the observed decline in arteriole density after 14 d, it is plausible to assert that capillaries also play a crucial role in maintaining oxygen levels within the surrounding tissues during the later stages.

Histological examination by hematoxylin and eosin (H&E) staining was conducted to analyze the invasion of new blood vessels and tissues into the implants. The newly formed blood vessels are predominantly located within the fibrous connective tissue that has developed in the space between the implant and the muscle (Fig. 1K). As expected, a higher density of ingrowing microvessels and fibrous tissue was found in SCOS GS compared to Blank GS and COS GS, and an obvious collateral with a thick arterial wall and mature erythrocytes could be observed in SCOS GS on day 7 after surgery (Fig. 1K). Together, these findings suggest that SCOS significantly promotes the formation of arteries and rescues perfusion in severe ischemia.
SCOS Modulates Mφ Polarization and the Inflammatory Response. Once biomaterials are implanted, myeloid cells gather and participate in the regulation of tissue repair (28). To examine whether myeloid cells were involved in the process of SCOS-mediated arteriogenesis in ischemia, implants were retrieved on day 7 postoperation and the recruited cells were analyzed for innate immune cell markers using flow cytometry. Dendritic cells (CD11b+CD11c+), neutrophils (CD11b+Ly6G+), and Mφs (CD11b+F4/80+) in each group showed no significant differences, which meant that the oligosaccharides had no effect on the recruitment of myeloid cells (Fig. 2 A and C and SI Appendix, Fig. S3). As Mφs can subsequently polarize to diverse phenotypes and exhibit different functions in angiogenesis, we further analyzed the phenotype of infiltrating Mφs in implants in ischemia. Flow cytometry analysis revealed that both SCOS GS and COS GS significantly attenuated M1 Mφ (CD197+CD206−) polarization, while SCOS GS triggered a higher extent of M2 Mφ (CD197−CD206+) polarization relative to the other groups (Fig. 2 B, D, and E). Immunostainings of the retrieved implants on day 7 also confirmed that more dominant CD206-positive M2 Mφs could be observed in the SCOS GS than in the other groups, contrary to the CD197-positive M1 polarization pattern (Fig. 2F). Additionally, gene profiling of Mφs isolated from SCOS GS showed higher expression of M2-type genes, including Tie2, Arg1, Hgf, Pdgfb, Vegfa, Mmp2, and Sdf, than Blank GS and COS GS (Fig. 2G). Conversely, several pro-inflammatory or anti-angiogenic (that is, M1-type) molecules were down-regulated in SCOS GS-treated Mφs; these included Il1b, Tnfa, Cxcl10, and Ifnb (Fig. 2G).

To further clarify the mechanism by which SCOS regulated Mφ polarization, we also examined oligosaccharide-dependent Mφ polarization in vitro. To determine the optimal dosage of oligosaccharide, mouse peritoneal Mφs (mpMφs) were co-cultured with a series of concentrations of SCOS or COS for 1 d, 3 d, and 7 d. We found that SCOS at a concentration of 800 ng/mL significantly enhanced the cell viability of mpMφs in vitro on day 3 and day 7; in contrast, COS appeared to have no effect on the cell viability of Mφs (SI Appendix, Fig. S4A). Moreover, the live/dead assay also revealed a substantial population of viable cells (indicated by green dots) in the cohort treated with SCOS at a concentration of 800 ng/mL for a duration of 3 d (SI Appendix, Fig. S4B). Thus, the concentration of 800 ng/mL oligosaccharides was selected for the subsequent in vitro experiments. The stimulation of Mφ polarization in vitro was subsequently confirmed by flow cytometry analysis; the results indicated that both SCOS and COS decreased the in vitro M1 polarization of Mφs, while only SCOS had a discernible effect on M2 polarization of Mφs (SI Appendix, Fig. S5 A-C). It is noteworthy that the overall expression of M1 macrophages was higher in vitro. Western blot analysis revealed that SCOS significantly inhibited NF-κB activation and increased the expression of p-STAT6 (Fig. 2 H-L).
We next investigated the inflammation-related gene expression of Mφs after treatment with oligosaccharides for 3 d, including canonical pro-inflammatory genes (Tnfa, Ifng, Il1b, and Il6) and anti-inflammatory genes (Il4 and Il10). Quantitative real-time polymerase chain reaction (qPCR) analysis showed that SCOS significantly decreased the pro-inflammatory gene expression of Tnfa, while COS up-regulated some pro-inflammatory genes (Il1b and Il6) (Fig. 2M). In contrast, the expression of the gene associated with anti-inflammatory cytokines (Il4) in Mφs was significantly elevated after treatment with SCOS for 3 d (Fig. 2M). To further validate the engagement of the NF-κB and STAT6 signaling pathways in SCOS-mediated regulation of Mφ polarization, we employed the small-molecule inhibitors BAY 11-7821 and AS1517499 to suppress NF-κB and STAT6 signaling, respectively. Interestingly, both qPCR analysis and immunostainings revealed that the beneficial effect of SCOS on M2 Mφ polarization was nullified by treatment with the inhibitor targeting STAT6 signaling (Fig. 2 N and O). Additionally, the inhibition of NF-κB signaling using BAY 11-7821 resulted in a more pronounced enhancement of SCOS-mediated regulation of M2 Mφ polarization (Fig. 2 N and O). Thus, the results indicate that SCOS induces the transition of M1-to-M2 Mφ polarization via a down-regulation of NF-κB signaling together with the phosphorylation of STAT6.

Conditioned Medium (CM) from SCOS-Treated Mφs Altered the Arteriogenic Behavior In Vitro. To investigate how Mφs affected arteriogenesis, an ex vivo aortic ring angiogenesis assay was conducted with the treatment of Mφs and oligosaccharides. We found that SCOS co-cultured with Mφs significantly promoted the formation of microvessels sprouting from the arterial rings (SI Appendix, Fig. S6). Strikingly, obvious tip cells could be observed in the SCOS-treated group, which meant that SCOS and Mφs are beneficial to the formation of new arterioles. Next, to explore the factors in Mφs that influenced the arteriogenic behavior of endothelial cells, we collected the CM from Mφs pretreated with SCOS or COS to culture mouse arterial endothelial cells (MAECs). With the treatment of SCOS/Mφ CM, the number of sprouts, number of branch points, and total network length in the sprouting fibrin bead assay showed a significantly higher level than with the treatment of the other groups (SI Appendix, Fig. S7). Thus, SCOS-stimulated Mφs were conducive to arterial sprouting. Esm1 is a tip cell marker and VEGF-regulated gene product, which efficiently mediates sprouting during angiogenesis (32). Notably, upregulation of Esm1 and Vegf was also detectable in SCOS/Mφ CM-treated MAECs (SI Appendix, Fig. S8).

Tubular network formation and migration of endothelial cells are also key steps of the arteriogenesis process. Strikingly, all Mφ CM exhibited an inhibited ability to induce the formation of networks and the migration of endothelial cells compared to the Blank (SI Appendix, Fig. S9). Among them, SCOS/Mφ CM slightly alleviated the inhibitory effect on cell migration (SI Appendix, Fig. S9 D and E). In addition, all CM collected from Mφs with or without the stimulation of oligosaccharides statistically restrained the cell viability of MAECs (SI Appendix, Fig. S9F). We also observed that the oligosaccharides themselves did not cause significant cytotoxicity on MAECs at a series of concentrations (SI Appendix, Fig. S10), which meant the cytokines secreted from Mφs may hinder the arterial function of MAECs.
Next, to further verify the role of Mφs in SCOS-induced arteriogenesis, we continuously administered clodronate liposomes every 2 d to totally deplete the circulating Mφs and evaluated the simultaneous angiogenesis (SI Appendix, Fig. S11A). Laser Speckle imaging analysis revealed that depletion of circulating Mφs was detrimental to reconstructing collateral circulation and restoring blood perfusion (SI Appendix, Fig. S11 B-D). In addition, implants containing Mφ-secreted CM from oligosaccharide stimulation slightly increased the blood perfusion but showed no statistical difference relative to implants without the Mφ-secreted CM (SI Appendix, Fig. S11 B-D). Immunostainings of α-SMA and CD31 also showed a significant impairment in arteriogenesis in all Mφ-depleted mice (SI Appendix, Fig. S11 E-G). The addition of COS/Mφ CM or SCOS/Mφ CM did not rescue the luminal and wall growth in collaterals (SI Appendix, Fig. S11E). Thus, these results reveal that cytokines secreted by Mφs negligibly affect SCOS-induced arteriogenesis in ischemia, which prompts us to hypothesize that Mφs might affect arteriogenesis in another manner.

Mφs Act as Perivascular Cells to Support Arteriogenesis. In addition to cytokine secretion, evidence that has emerged in recent years points to the fact that Mφs can also serve as perivascular cells to support vessel anastomosis and maturation (33, 34). To further probe the relevance of oligosaccharide-stimulated Mφs for arteriogenesis in ischemia, we selectively and transiently depleted circulating Mφs with clodronate liposome injection and designed cell transfer experiments (Fig. 3A). We gave intraperitoneal injections of PBS only at the initial stage (d0-d3 after implantation) to ensure that the Mφs could efficiently secrete cytokines to stimulate cell sprouting from existing blood vessels and recruit endothelial cells to the injured sites. Subsequently, clodronate liposomes were continuously injected (d4-d6 after implantation) to totally deplete the circulating Mφs, to exclude the influence of Mφ-endothelial cell interactions on directing arterial formation and maturation. Afterward, Mφs pretreated with COS or SCOS in vitro were injected into ischemic limb muscles (d7 after implantation) to examine the role of Mφs with different treatments in regulating arteriogenesis. Hind limb perfusion measurements by Laser Speckle imaging revealed a progressive increase at the implanted site and paw over the first 7 d (Fig. 3 B-D). However, the blood perfusion in all groups showed a visible impairment after the depletion of circulating Mφs, which was improved by transfer of SCOS-pretreated Mφs on day 9 (Fig. 3 B-D). In contrast, transfer of Mφs or COS-pretreated Mφs had a negligible effect on the promotion of limb perfusion. Immunostainings revealed that α-SMA+ cell-encapsulated arteries were difficult to observe in the PBS-injected group, with a significant decrease of CD31+ endothelial cell ingrowth into the implants after the depletion of host circulating Mφs, which provided evidence that Mφs are indispensable for arteriogenesis (Fig. 3 E-G). Transfer of Mφs or COS-pretreated Mφs also rescued arteriogenesis, although to a lesser extent (Fig. 3 E-G). Interestingly, adoptive transfer of SCOS-pretreated Mφs efficiently rescued arteriogenesis, enabling the formation of mature arteries with a larger inner circumference (Fig. 3E).
3E). Notably, we observed that a subset of α-SMA+ cells did not participate in arterial assembly following the adoptive transfer of SCOS-pretreated Mφs (Fig. 3E), indicating that this approach of adoptive Mφ transfer does not fully rescue the process of Mφ-associated arteriogenesis.

To better capture a possible interaction between Mφs and blood vessels, we investigated the characteristic distribution and densities of perivascular Mφs. Immunostainings revealed that the numbers of F4/80+ Mφs infiltrating the SCOS-treated Mφ group were higher than in the other groups after the injection of adoptive Mφs, indicating that the SCOS-treated Mφs had better cell viability and migration capacity than the other groups (Fig. 3 H and I). Consistently, an abundant density of F4/80+ perivascular Mφs was observed adjacent to lectin+ blood vessels with the treatment of SCOS, embedded within the laminin+ vascular basement membranes, relative to the Mφ and COS-treated Mφ groups (Fig. 3 H and J). We further analyzed the polarization phenotype of perivascular Mφs and measured the density of M1 or M2 Mφs by their expression of CD197 or CD206. The results showed that the perivascular Mφs in the SCOS-treated Mφ group were virtually all positive for CD206 (SI Appendix, Fig. S12). Additionally, expression of genes involved in pericyte differentiation (Tie1, Tie2, Angpt1, and Angpt2) or arterial endothelial markers [Dll1 (Delta-like 1), Notch1, Efnb2, and Sox17] was increased in SCOS-treated Mφs, whereas expression of genes involved in venous endothelial markers (Ephb4, Nr2f2, and Aplnr) was decreased (Fig. 3K). Together, these results suggest that Mφs play a predominant role as perivascular cells in SCOS-induced arteriogenesis.

SCOS Regulates Perivascular Mφ-Mediated Arteriogenesis Via a Canonical Notch Pathway. As Notch signaling is a cell-to-cell contact-dependent signaling pathway that positively regulates arterial formation (35,36), we explored the relevance between endothelial Notch signaling and perivascular Mφs in the context of oligosaccharide-induced arteriogenesis. Immunostainings confirmed that most active Notch1 protein was strongly localized in lectin+ vessels adjacent to perivascular Mφs, and not all intramural microvessels were positive for Notch1 protein expression (Fig. 4 A-C). Notably, active Notch1 expression on endothelial cells was itself subject to SCOS regulation. The density of Notch1 protein on the surface of lectin+ vessels significantly increased with the treatment of SCOS GS, whereas COS GS seemed to have a negligible effect on activating Notch1 expression (Fig. 4 A and D). Comparative gene expression profiling in ischemia with the implantation of SCOS GS revealed high expression of Notch1 and its ligand Dll1 relative to COS GS and Blank GS, while transcript levels of the Notch ligand Dll4 were comparable (Fig. 4E).

To further confirm whether Mφs were necessary for activating endothelial Notch signaling, we analyzed Notch-related gene expression in directly co-cultured Mφs and MAECs with or without oligosaccharide treatment. mRNA analysis of Notch1, as well as several downstream mediators of Notch signaling (Hes1, Hes5, Hey1, and Hey2), showed up-regulation in MAECs co-cultured with SCOS and Mφs (Fig.
4F). Conversely, stimulation with SCOS or COS alone did not induce a significant difference in the expression of related genes in MAECs. Moreover, flow cytometry analysis revealed that SCOS and COS were more likely to bind to Mφs than to MAECs (SI Appendix, Fig. S13), meaning that the oligosaccharides preferentially stimulated the Mφs rather than arterial endothelial cells. To further address the role of Notch signaling in Mφ-mediated arteriogenesis, we pharmacologically inhibited Notch by administration of the γ-secretase inhibitor DAPT. As the results above showed no significant effect of COS GS on Mφ-mediated arteriogenesis, we focused on SCOS GS as a potential regulator of Notch signaling for further study. Laser Speckle imaging analysis revealed that blood perfusion at the implanted site progressively increased over time with the treatment of DAPT (Fig. 4 G and H). In contrast, exposing implants to DAPT caused a statistically significant inhibition of blood flow reconstruction at the paw (Fig. 4 G and I). A previous study established that inhibition of Notch signaling directs tip-derived endothelial cells into developing veins, while activation of Notch signaling couples sprouting angiogenesis and artery formation (35). The increase of blood perfusion in the ischemic site was attributed to the large number of capillaries growing in under DAPT, while the loss of Notch disrupted the reconstruction of collateral circulation and thus led to impaired blood flow recovery in the hind limb. Strikingly, we found that SCOS GS produced no significant improvement in restoring blood perfusion compared to Blank GS after the treatment of DAPT (Fig. 4 G-I). Similarly, histomorphometric analysis of arteriogenesis after the treatment of DAPT showed a significant impairment in luminal and wall growth in collaterals (Fig. 4J). Immunostaining analysis also revealed that inhibition of Notch signaling weakened the capacity of SCOS GS to induce α-SMA+ cell-surrounded arteries, and SCOS GS exhibited no significant difference in inducing angiogenesis and artery formation compared to Blank GS under DAPT treatment (Fig. 4 K-M). Together, these results suggest that inhibition of Notch signaling restricts the promotion of arteriogenesis by SCOS.

As SCOS could efficiently promote the polarization of M1-to-M2 Mφs with an obvious down-regulation of the inflammatory response, we next investigated the essential role of inflammatory regulation in SCOS-induced arteriogenesis. Cytokine analysis of supernatant from harvested SCOS GS confirmed a widespread down-regulation of numerous inflammatory cytokines (most notably IL-1β, IL-16, IP-10, RANTES, and SDF-1) with the treatment of DAPT (Fig. 4 N and O). In addition, we found that the levels of monocyte chemoattractant protein-1 and macrophage colony-stimulating factor, the main chemokines for recruitment and differentiation of Mφs, were decreased in the DAPT-treated group relative to SCOS GS. These results again confirm the importance of Mφs in arteriogenesis, and that the secreted cytokines are not the most important factor in this process.
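The qPCR comparisons throughout this section report fold changes in gene expression between treatment groups. Purely as an illustration of how such values are commonly derived, the sketch below implements the standard Livak 2^(-ΔΔCt) relative-quantification formula; the Ct numbers and the choice of Gapdh as the reference gene are hypothetical placeholders, not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^(-ddCt) relative quantification.

    ct_target / ct_ref: Ct values of the gene of interest and the
    reference gene in the treated sample; *_ctrl: same in the control.
    """
    d_ct_treated = ct_target - ct_ref          # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control        # compare treated vs. control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for Tnfa normalized to Gapdh (illustration only).
fold_change = relative_expression(ct_target=24.1, ct_ref=18.0,
                                  ct_target_ctrl=22.3, ct_ref_ctrl=18.1)
print(f"Tnfa fold change vs. control: {fold_change:.2f}")
```

A fold change below 1 indicates reduced expression relative to the control, matching the direction of the Tnfa result reported above.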
Notch is a cell-surface receptor modified by O-glycans attached to epidermal growth factor-like (EGF) repeats in its extracellular domain (37,38), in which protein O-fucosyltransferase (POFUT) plays an important role in regulating the binding of Notch ligands. To gain further insight into the mechanism by which SCOS affected arteriogenesis via canonical Notch signaling, we examined the gene expression of Notch-related O-glycosylation. In the endoplasmic reticulum, the EGF repeats in the Notch receptor extracellular domain are modified with O-N-acetylglucosamine (GlcNAc) by EOGT (epidermal growth factor-domain specific O-GlcNAc transferase) and with O-glucose by POGLUT (protein O-glucosyltransferase) (39). qPCR analysis revealed that SCOS treatment substantially up-regulated the expression of Poglut2 and hardly affected Eogt (Fig. 4P), indicating that SCOS activates O-glucose-related glycosylation in Mφs. To examine the effect of Poglut2 on SCOS-related glycosylation, we utilized a co-culture system of mpMφ/MAEC to validate the influence of Poglut2 on SCOS-mediated activation of Notch signaling. In comparison with the Notch signaling triggered by exogenous DLL1 on the surface of mpMφs and MAECs, SCOS-induced Notch signaling predominantly manifested on the surface of F4/80+ mpMφs (SI Appendix, Fig. S14). As anticipated, silencing the expression of the Poglut2 gene effectively hindered SCOS-induced activation of Notch1 in mpMφs (Fig. 4Q and SI Appendix, Fig. S14). Additionally, we found that disruption of glycosylation by Poglut2 siRNA in mpMφs also markedly inhibited the expression of arterial endothelial markers in MAECs, to a level similar to the Mφ and Blank groups in the mpMφ/MAEC co-culture system (Fig. 4R), indicating that inhibition of Notch protein glycosylation in Mφs restricted SCOS-mediated arteriogenesis. Together, these results suggest that SCOS regulates perivascular Mφ-mediated arteriogenesis via a canonical Notch pathway.

SCOS Rescues Cardiac Function in a Model of Myocardial Infarction (MI). Given the potential utility of SCOS to rescue perfusion in the ischemic limb, we sought to investigate whether this oligosaccharide could improve outcomes in the more demanding setting of MI, a representative ischemic model that also involves Mφ-associated pro-angiogenesis and wound healing (40,41). To verify the effects of SCOS on remodeling after MI, we generated a photo-cross-linked hyaluronic acid (HA) hydrogel mixed with SCOS and compared its repair capability against healthy mice (Sham) and MI mice that received HA hydrogel injection alone. At 28 d after the surgery, terminal histological analysis with Masson's trichrome staining revealed that treatment with SCOS HA significantly reduced the collagen density in the infarct area and sustained the thickness of the interventricular septum (Fig. 5A). Immunostainings of CD31-positive vessels in the ischemic zone of the cardiac tissue indicated a substantial increase in the ingrowth of α-SMA+ cell-encapsulated arteries in SCOS-treated infarcted hearts compared to the HA group (Fig. 5 B and C). Furthermore, echocardiographic analysis was administered to determine cardiac functional parameters on day 14 and day 28 after MI (Fig. 5D and SI Appendix, Fig. S15). As a cardiac function indicator, left ventricular ejection fractions (LVEFs) in empty HA hydrogel-treated MI hearts exhibited half the value of the sham group (Fig.
5E). In contrast, SCOS HA resulted in partial rescue of cardiac function, with LVEFs and left ventricular fractional shortening (LVFS) close to that measured in the sham controls (Fig. 5 E and F). Immunostainings revealed that treatment with SCOS HA efficiently reduced myocardial apoptosis (Fig. 5G). With the treatment of SCOS HA, the reentry of cardiomyocytes into the cell cycle was unambiguously confirmed by analysis of PCM1-expressing cardiomyocytes that were positive for EdU and exhibited Aurora B at the midline between cardiomyocyte nuclei (Fig. 5H and SI Appendix, Fig. S16). These results suggest substantial rescue of cardiac function with SCOS treatment following a major ischemic injury in MI mice.

To further validate the role of Mφs in MI regulated by SCOS, we investigated the impact of SCOS on Mφ polarization in models of MI. Flow cytometry analysis revealed a significant increase in the proportion of Mφs in the MI model compared to the sham group, while no significant difference was observed between the SCOS HA group and the HA group (SI Appendix, Fig. S17 A and C). A shift of M1-to-M2 Mφ polarization with an obvious expression of activated Notch1 in CD206+ Mφs was also observed in SCOS-treated infarcted hearts (SI Appendix, Figs. S17 B, D, and E and S18), consistent with the aforementioned findings in limb ischemia models. The collective findings suggest that the activation of Notch signaling by SCOS promotes Mφ-associated arteriogenesis as a potential therapeutic approach for MI.

Discussion

Implants that efficiently induce blood vessel growth into ischemic sites and rescue sufficient circulation are a prerequisite for the treatment of ischemic disease (3,4,9,42,43). Mφs, as among the first responders to injury, exhibit remarkable plasticity that enables them to respond efficiently to implant signals and facilitate collateral vessel remodeling (44,45). Therefore, there is a pressing need to develop a potential biomaterial-based therapeutic strategy that can effectively activate Mφs and thus induce arteriogenesis. Here, our study demonstrates that SCOS can serve as a proangiogenic agent by regulating perivascular Mφs through canonical Notch signaling, promoting arteriogenesis in both mouse models of lower limb ischemia and MI (SI Appendix, Fig. S19). During ischemic neovascularization, Mφs commonly regulate endothelial cells through two mechanisms: secreting cytokines to stimulate the sprouting of endothelial cells, or acting as perivascular cells to support the maturation of blood vessels. Our findings suggest that the Notch signaling induced by SCOS between perivascular Mφs and arterial endothelial cells primarily controls the formation of mature and highly developed arteries within the ischemic cavity, rather than cytokine-mediated endothelial cell behavior. Depletion of circulating Mφs significantly abrogated the formation of arteries and reconstruction of collateral circulation. Notably, an adoptive transfer of SCOS-treated Mφs restored vascularization in Mφ-depleted mice, while administration of SCOS-treated CM had no discernible effect on promoting the regeneration of arteries. This biomaterial, which promotes arteriogenesis without the use of extra growth factors, represents a promising therapeutic approach for ischemic diseases.
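As a brief aside before continuing the discussion, the echocardiographic indices reported in the MI experiments above (LVEF and LVFS) are simple ratios of ventricular volumes and internal diameters. The sketch below shows the standard formulas with purely illustrative input values, not measurements from this study.

```python
def lvef(edv_ul, esv_ul):
    """Left ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes."""
    return (edv_ul - esv_ul) / edv_ul * 100.0

def lvfs(lvidd_mm, lvids_mm):
    """Left ventricular fractional shortening (%) from internal
    diameters at diastole and systole."""
    return (lvidd_mm - lvids_mm) / lvidd_mm * 100.0

# Illustrative values only.
print(f"LVEF = {lvef(60.0, 30.0):.1f}%")   # 50.0%
print(f"LVFS = {lvfs(4.0, 2.8):.1f}%")     # 30.0%
```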
Accumulating evidence suggests that Mφs can be classified into two distinct subpopulations: a canonical pro-inflammatory phenotype (M1) and an alternative pro-healing phenotype (M2), and M2 Mφs can be further subdivided into M2a, M2c, and M2f (13). Although different subtypes of M2 Mφs play varying roles in directing arteriogenesis, it is believed that CD206-positive M2a Mφs are particularly relevant to arteriogenesis due to their perivascular positioning and ability to induce collateral circulation reconstruction during ischemia (46). Apart from that, Mφs, and in particular the differential cytokine secretion associated with their polarized phenotypes, are responsible for collateral vessel remodeling during arteriogenesis (11,47-49). In the present study, the indispensable role of Mφs in inducing SCOS-mediated arteriogenesis did not preclude participation of the cytokines secreted by Mφs in ischemic mice. Nevertheless, our findings demonstrated that supplementation of COS- or SCOS-stimulated Mφ CM at a late stage had no appreciable effect on collateral vessel reconstruction and blood flow recovery in the absence of Mφs. Additionally, we observed that mpMφ-stimulated CM exerted a negative regulatory effect on arterial cell-related behaviors in vitro. As previous studies have reported that excess secretion of inflammatory cytokines from Mφs also promotes lipoprotein retention, degrades the extracellular matrix, and eventually leads to atherosclerosis (50,51), we speculated that the pro-inflammatory cytokines secreted from Mφs may also have restricted the process of arteriogenesis. Although SCOS/Mφs CM efficiently alleviated Mφ-related inflammation by regulating Mφ polarization to M2 via the STAT6 signaling pathway, its remarkable promotion of arteriogenesis in vivo was not consistently observed. Our data collectively indicate that the cytokines released from Mφs, particularly during the later stages, are unlikely to be primarily responsible for the SCOS-related arteriogenesis in ischemia.

In addition to secreted molecular pathways, the behavior of endothelial cells is also controlled by cell-to-cell communication, which has roles in angiogenesis and numerous vascular pathologies. Emerging evidence indicates that a distinct population of perivascular Mφs residing proximal to the blood vasculature is crucial for improving blood flow regulation and functional recovery of ischemic tissues (34,46,52). Perivascular Mφs have been established to promote vessel stabilization, which in turn functions as a barrier against potentially harmful blood-borne substances in the tissues. However, their involvement in biomaterial-mediated arteriogenesis has not yet been described. Here, we have demonstrated that SCOS provoked the accumulation of CD206-positive perivascular Mφs along arterioles during collateral vessel remodeling in ischemia. A previous study reported that perivascular Mφs critically induced venous-to-arterial differentiation in the presence of arterial vascular smooth muscle cells in the parenchyma (52). Unlike SCOS/Mφs CM induction, adoptive transfer of SCOS-stimulated Mφs significantly restored blood perfusion and vasculature reconstruction in Mφ-depleted mice, which confirmed that this cell-to-cell communication played a vital role in inducing arteriogenesis.
The process of arteriogenesis is always associated with biomechanically mediated signaling pathways. Once an injury occurs, instructive cues expressed by local resident immune cells orchestrate the injury response of endothelial cells or other resident progenitor cells during tissue regeneration, which is partly mediated by endothelial-specific expression of Notch or integrin signaling. Previous studies have shown that the endothelial Notch signaling pathway fosters Mφ maturation during ischemia, which in turn promotes arteriogenesis (14). This study showed that activated Notch was highly expressed between perivascular Mφs and arterial endothelial cells in SCOS GS, which was able to coordinate arteriogenesis and tissue regeneration in ischemia. Notch is a cell-surface receptor that is regulated by O-glycans attached to EGF repeats in its extracellular domain (37). The binding of Notch ligands is regulated by POFUT, POGLUT, or EOGT (38,53,54). Notably, SCOS significantly improved the expression of Poglut2, and our experiments with Poglut2 siRNA-treated Mφs abrogated the glycosylation of Notch receptors and statistically reduced arterial-related gene expression in MAECs. Use of the Notch inhibitor DAPT resulted in a dysfunctional vasculature and hindered the promotion of arteriogenesis by SCOS. Therefore, we conclude that SCOS is directly involved in the O-glucose or O-GlcNAc modifications within EGF repeats via POGLUT2-mediated glycosylation, leading to increased ligand-induced Notch signaling and enhanced therapeutic arteriogenesis in ischemia.

In summary, we have described a semisynthesized sulfated oligosaccharide that efficiently induces arteriogenesis and rapidly achieves blood flow penetration in ischemia. We also revealed an approach by which a biomaterial regulates perivascular Mφ-mediated arteriogenesis, which is different from conventional cytokine-mediated angiogenesis. Although the definite mechanism by which SCOS is involved in the synthetic process of EGF repeats and thus regulates Notch activity was not experimentally demonstrated in this study, it was clear that SCOS could recruit local resident Mφs to transform into perivascular Mφs and induce arteriogenesis by activating Notch signaling within Mφ-to-endothelial cell communication. We believe that these findings provide insight into Mφ-mediated arteriogenesis induced by immunomodulation of SCOS and may be adopted as a potential therapeutic target for ischemic defects and diseases.

Fig. 1. SCOS promotes arteriogenesis and blood reperfusion in hind limb ischemia. (A) Illustration of the preparation of SCOS GS and implantation into the mouse hind limb ischemia model. (B) Laser Speckle imaging of ischemic implanted sites and paw over time. (C and D) Quantification of perfusion levels of the ischemic sites (C) and paw (D) (n = 4). *P < 0.05 vs. Blank GS; #P < 0.05 vs. COS GS. C.L. norm., contralateral (non-ischemic) limb normalized. (E-G) Representative angiographic images (E) and quantitative analysis (F and G) of relative blood vessel volume (BVV/TV) and vascular number (Vs.num.) in implants on day 7 and day 14 (n = 4). (Scale bars, 1 mm.) (H and I) Representative immunostainings (H) and quantitative analysis (I) of α-SMA+ arteries in implants (n = 5). (Scale bars, 100 μm.)
(J) Quantification of the dissolved oxygen levels at the ischemic sites over time, with a detection depth of 3 mm. The dotted red line and red number represent the quantification of intramuscular dissolved oxygen in the hind limb of the sham group. (K) Hematoxylin and eosin (H&E) staining of implants on days 7 and 14 after implantation in the ischemic hind limb. (Scale bars, 100 μm.) Blank GS, PBS-immersed gelatin sponge; COS GS, COS-coated gelatin sponge; SCOS GS, SCOS-coated gelatin sponge. Data represent mean ± SD. *P < 0.05, **P < 0.01, ***P < 0.005, and ****P < 0.001; ns, not significant [(C and D), one-way ANOVA with Tukey's post hoc test; (F, G, and J), two-way ANOVA with Tukey's post hoc test].

Detailed materials and methods are provided in SI Appendix, Materials and Methods, including the Materials; Mouse Hind Limb Ischemia; Micro-CT Analysis; Mouse MI Model; Echocardiography; Histology and Immunofluorescence Analysis; Flow Cytometry Analysis; Cell Isolation and Culture; Mφs Polarization; Western Blot Analysis; Rat Aortic Ring Angiogenesis Assay; Conditioned Media Collection from Mφs; Sprouting Fibrin Bead Assay; Tube Formation Assay; Scratch Wound Assay; Mφs Depletion; Adoptive Transfer Experiments; Poglut2 Gene Silence Experiment; Quantitative RT-PCR Analysis; Cell Binding Assay; and Statistical Analysis. All surgical procedures were approved by the Institutional Animal Care and Use Committee of East China University of Science and Technology. All study data are included in the article and/or SI Appendix.

Data, Materials, and Software Availability.
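The figure legend above specifies one-way and two-way ANOVA with Tukey's post hoc test for group comparisons. As an illustration only, the minimal sketch below runs a one-way ANOVA followed by Tukey's HSD on hypothetical perfusion values; the group data are placeholders rather than measurements from the study, and scipy.stats.tukey_hsd requires SciPy 1.8 or newer.

```python
import numpy as np
from scipy import stats

# Hypothetical normalized perfusion values for three groups (placeholders).
blank_gs = np.array([0.42, 0.45, 0.40, 0.44])
cos_gs   = np.array([0.51, 0.55, 0.49, 0.53])
scos_gs  = np.array([0.68, 0.72, 0.70, 0.66])

# One-way ANOVA across the three groups.
f_stat, p_val = stats.f_oneway(blank_gs, cos_gs, scos_gs)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD for pairwise post hoc comparisons.
tukey = stats.tukey_hsd(blank_gs, cos_gs, scos_gs)
print(tukey)
```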
2023-11-11T06:18:32.181Z
2023-11-09T00:00:00.000
{ "year": 2023, "sha1": "86ad8591b2b6a2221ea39bcd45aebc6ef2cb1f61", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1073/pnas.2307480120", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c0676babec7c0cfc73c4f40a74a7741cf8175e1b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
234767054
pes2o/s2orc
v3-fos-license
Management of a High-Risk Surgery with Emicizumab and Factor VIII in a Child with a Severe Hemophilia A and Inhibitor

The recent development of a humanized, bi-specific, monoclonal antibody mimicking the function of activated factor VIII was a revolution in the management of patients suffering from severe hemophilia A with inhibitors. 1 The phase III randomized studies have shown more efficient prophylaxis with this subcutaneously administered drug in these patients compared with recombinant FVIIa (rFVIIa) and activated prothrombin complex concentrates (aPCC). 2,3 Nonetheless, there are "real life" matters that need to be explored in this new era of managing hemophilia patients, such as surgery management under emicizumab, especially in children. 4-6 Here, we report the first case, to our knowledge, of major orthopedic surgery managed with factor VIII infusions in a child with inhibitor receiving emicizumab.

Our patient is a 9-year-old boy with severe hemophilia A. He presented an inhibitor after the 10th exposure day to a factor VIII (FVIII) concentrate (historic peak: 412 BU/mL). Thereafter, he presented an important intracranial hemorrhage (subdural and intraparenchymal hematomas). The first treatment with rFVIIa (Novoseven, 90 µg/kg/2 h) and tranexamic acid was inefficient. It was therefore switched to aPCC (FEIBA 80 IU/kg/8 h), which was rapidly effective. He also presented several knee and left hip hemarthroses responsible for arthropathies. Several attempts at immune tolerance induction through a Port-a-Cath failed whatever the FVIII concentrate used, recombinant FVIII or plasma-derived FVIII. Because of the insufficient efficacy of rFVIIa in totally stopping some bleeding events, prophylaxis with aPCC was performed with a daily infusion of 80 IU/kg. Given the cumbersome nature of this treatment, emicizumab (Hemlibra) was introduced instead of aPCC in December 2018. Emicizumab was started at 3 mg/kg/week for 4 weeks and then adapted to 1.5 mg/kg/week. The boy had no more bleeding, including clinically evident hemarthrosis. However, the left hip arthropathy worsened with aggravation of limping and pain, leading to the use of a wheelchair. Hip X-ray showed an osteonecrosis of the left femoral head requiring a femoral varization osteotomy, which was planned in June 2020. Because the inhibitor titer was low, at 2 BU/mL, rFVIII-Fc (Elocta) was administered to normalize coagulation during the surgery. An rFVIII-Fc bolus of 6,000 IU (150 IU/kg) was initially administered, followed by a continuous infusion of 12 IU/kg/h (►Fig. 1A). The emicizumab treatment was continued (1.5 mg/kg/week). This complex surgery performed on rFVIII-Fc required close monitoring with frequent FVIII:C measurements to adjust rFVIII-Fc administration.
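As a side note on the dosing arithmetic reported above, the bolus and infusion figures are internally consistent with a patient weight of about 40 kg (6,000 IU ÷ 150 IU/kg). The sketch below is a purely illustrative weight-based dose calculation, not clinical guidance.

```python
def fviii_bolus_iu(weight_kg, dose_iu_per_kg):
    """Weight-based factor VIII bolus (IU)."""
    return weight_kg * dose_iu_per_kg

def fviii_infusion_iu_per_h(weight_kg, rate_iu_per_kg_h):
    """Weight-based continuous infusion rate (IU/h)."""
    return weight_kg * rate_iu_per_kg_h

weight = 6000 / 150                          # implied patient weight: 40 kg
print(fviii_bolus_iu(weight, 150))           # 6000 IU bolus
print(fviii_infusion_iu_per_h(weight, 12))   # 480 IU/h continuous infusion
```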
All FVIII:C levels were measured with a chromogenic method (STA-R Max3 analyzer; TriniCHROM FVIII:C reagent, Stago) using bovine coagulation factors X and IXa, which do not interfere with emicizumab. 7-9 In the same plasma samples, thrombin generation assays (TGA) were performed using a low concentration of tissue factor (1 pM), as suggested for TGA in hemophilia A patients. 10 TGA were performed before the surgery without rFVIII-Fc and throughout the rFVIII-Fc infusions (►Fig. 1B). The FVIII:C level measured 30 minutes after the bolus was high, at 242%, and was maintained above 200% during the first 24 hours post-surgery without change of the rFVIII-Fc infusion rate (►Fig. 1A). The orthopedic surgery included a proximal femoral derotation osteotomy, fixed with osteosynthesis material (►Fig. 2). No bleeding (or other complication) occurred during or after surgery, with low per-operative blood loss (200 mL). His immediate post-surgery hemoglobin level was 11.8 g/dL. After the first post-surgery day, the FVIII:C levels progressively decreased, reaching 100% at H28. At the 78th hour, a 50 IU/kg rFVIII-Fc bolus was administered for drain removal. After a new peak, the FVIII:C level further decreased despite maintaining the continuous infusion at a dose of 10 IU/kg/h, because of inhibitor resurgence at H162 with a titer of 1.8 BU/mL, which rose to 33 BU/mL at H200. The TGA parameters evolved within normal ranges during the first 24 post-surgical hours and then progressively decreased together with the FVIII:C levels (►Fig. 1B). On the 6th post-surgery day, rFVIII-Fc was replaced with rFVIIa boluses at 90 µg/kg, which were stopped after 1 day. During his stay, the patient did not bleed and therefore did not require any red blood cell concentrate transfusion; the lowest hemoglobin he presented was 7.9 g/dL and was corrected with a single iron sucrose injection (Venofer). Furthermore, no markers of coagulation activation were detected during rFVIII-Fc and rFVIIa treatments. While still receiving prophylaxis with emicizumab alone, the patient was able to walk within 2 months after surgery and presented no bleeding.

To date, most of the surgery reports with emicizumab in cases with inhibitor have concerned adults, and rFVIIa was most often used to prevent bleeding during and after surgical procedures. In pediatrics, only minor procedures have been described. 4-6,11-13 Our case is thus, to our knowledge, the first description of a major orthopaedic surgery managed with FVIII concentrates in a child with severe hemophilia A with inhibitor while receiving prophylaxis with emicizumab. The choice we made of using rFVIII-Fc for bleeding prevention during and after the surgery was foremost driven by the presence of a low-titer inhibitor. Furthermore, we avoided aPCC because of its thrombotic risk in association with emicizumab. 14 Finally, rFVIIa was ruled out because it was often only partially effective for this patient. We show here that, as for some adults reported to date, major surgeries can be safely managed with FVIII concentrates in children with severe hemophilia A with inhibitor while receiving emicizumab. This dual treatment could also be applicable without an inhibitor at the doses usually administered. 15 In our patient, the FVIII:C chromogenic assay appeared to be a reliable tool for peri-surgical monitoring in children despite concomitant treatment with emicizumab, as previously proposed.
8,9 TGA with a low TF concentration performed in parallel with FVIII:C measurements could be helpful. 9 However, our results did not show a perfect correlation between TGA parameters and normalized FVIII levels. The TGA under these conditions is thus not reliable enough for the monitoring of emicizumab treatments and needs further adjustments. Finally, it was rapidly observed that maintaining prophylaxis with emicizumab reduces the duration of post-surgical treatment in major orthopaedic surgeries. 13 Indeed, for our patient, as already reported for adults receiving emicizumab, FVIII or rFVIIa injections were only necessary until the 7th day post-surgery. Thereafter, prevention with emicizumab alone was sufficient to protect against late bleeding during rehabilitation.
2021-05-19T05:16:36.495Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "adf4652e4d5f4abb514e83812afd5aca52522ea4", "oa_license": "CCBY", "oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0041-1728667.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "adf4652e4d5f4abb514e83812afd5aca52522ea4", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
236223052
pes2o/s2orc
v3-fos-license
A Cooperative Mobile Robot and Manipulator System (Co-MRMS) for Transport and Lay-up of Fibre Plies in Modern Composite Material Manufacture

Composite materials are widely used in industry due to their light weight and specific performance. Currently, composite manufacturing mainly relies on manual labour and individual skills, especially in transport and lay-up processes, which are time-consuming and prone to errors. As part of a preliminary investigation into the feasibility of deploying autonomous robotics for composite manufacturing, this paper investigates a cooperative mobile robot and manipulator system (Co-MRMS) for material transport and composite lay-up, which mainly comprises a mobile robot, a fixed-base manipulator and a machine vision sub-system. In the proposed system, marker-based and Fourier transform-based machine vision approaches are used to achieve high accuracy in localisation and fibre orientation detection respectively. Moreover, a particle-based approach is adopted to model material deformation during manipulation within robotic simulations. As a case study, a vacuum suction-based end-effector model is developed to deal with sagging effects and to quickly evaluate different gripper designs comprising an array of multiple suction cups. Comprehensive simulations and physical experiments, conducted with a 6-DOF serial manipulator and a two-wheeled differential drive mobile robot, demonstrate the efficient interaction and high performance of the Co-MRMS for autonomous material transportation, material localisation, fibre orientation detection and grasping of deformable material. Additionally, the experimental results verify that the presented machine vision approach achieves high accuracy in localisation (root mean square error of 4.04 mm) and fibre orientation detection (root mean square error of 1.84°).

Introduction

Due to their interesting properties and high strength-to-weight ratio, the applications of composite materials have grown considerably in the last decades [1,2]. They are usually made of multiple plies of fibres (e.g. carbon, glass and/or synthetic fibres), layered up in alternating orientations and held together by resin [3]. Therefore, the laying-up of fibre plies is the fundamental manufacturing phase in the production of composite materials. It is usually performed by human operators, who handle and transport the raw materials, making composite manufacturing time-consuming, labour-intensive and prone to errors. The demand for phasing in robotic solutions to improve process efficiency and increase operator safety has grown significantly. Automated Tape Laying (ATL) [4] and Automated Fibre Placement (AFP) [5] are two popular technologies employed in the automated lay-up of composite material. However, limited by the heavy cost of specialised equipment and low flexibility, they are only suitable for making small composite parts [6]. As a result, investigations into the use of commercially available robotic platforms for composite lay-up are on the rise in composite manufacturing.

Previous works have investigated the viability of using robotic systems in advanced composite manufacturing by exploiting the flexibility of robots to meet the stringent demands of manufacturing processes. In [7] and [8], complete systems for handling and laying up prepreg on a mould were developed, and robotic workcells were demonstrated with different modules. Bjornsson et al.
[9] surveyed pick-and-place systems in automated composite handling with regard to handling strategy, gripping technology, reconfigurability, etc. This survey indicated that it is hard to find a generic design principle, and that the best solution for handling raw materials for composite manufacture depends on the specific case study. Schuster et al. [10,11] demonstrated how cooperative robotic manipulators can execute the automated draping process of large composite plies in physical experiments. Similar research has been done by Deden et al., who also addressed the complete handling process from path planning and end-effector design to ply detection [12]. Szcesny et al. [13] proposed an innovative approach for automated composite ply placement employing three industrial manipulators, where two of them were equipped with grippers for material grasping and the third manipulated a mounted compaction roller for layer compression. A comparable hybrid robot cell was developed by Malhan et al. [14,15], where rapid refinement of online grasping trajectories was studied. Despite these advances, cooperative/hybrid robotic systems involving mobile robot platforms and fixed-base robotic manipulators have received little attention in the context of advanced composite manufacturing.

Due to the requirement for accurate localisation and fibre orientation detection, an efficient vision system is of great importance for an autonomous robotic system in advanced composite manufacturing. Fibre orientation detection is challenging due to the high surface reflectivity and fine weaving of the material, and thus it has still predominantly been accomplished manually in practice [16,17]. Traditional machine vision methods for fibre orientation detection of textiles prefer to utilise diffused lighting [18], such as diffuse dome [19] and flat diffuse [20] illumination measuring techniques. Polarisation model approaches have been particularly popular for measuring fibre orientation, where contrast between textile features such as fibres and seams is used to identify the structure of the material relative to the camera [21]. The method presented in [22] used a fibre reflection model to measure fibre orientation from an image and achieved good accuracy and robustness for different types of surfaces. However, when considering the specific application of advanced composite manufacturing, changes in lighting conditions are often unavoidable because of the moving shadow of the robot arm cast on the material. The integration of vision systems with robotics was considered by only a few of the previous works. This means such systems are inflexible, as they are unable to cope with dynamic variations within advanced composite manufacturing processes.

In composite manufacturing, material transport and composite lay-up have not been integrated into a single autonomous robotic system, which is challenging due to the many technologies involved, including path planning, material detection and localisation, etc. Achieving this requires the development of a strategy that combines different modules in a flexible system and provides autonomous material transportation and sufficiently accurate material handling capabilities. This paper presents a case study on robotic material transportation and composite lay-up, based on a real-world scenario commonly found in advanced composite manufacturing.
Compared to previous works, this research addresses specific challenges that arise from the introduction of different robots that must be coordinated along with the complex set of tasks covering transport, detection, grasping and placement of deformable material for composite manufacturing applications. The aim of this research is to conduct a pilot study on the feasibility of deploying a cooperative robotic system to perform a series of tasks in composite material manufacturing. Therefore, a cooperative mobile robot and manipulator system (Co-MRMS), which consists of an autonomous mobile robot, a fixed-base manipulator and a machine vision sub-system, is presented in this paper. The mobile robot transports the material autonomously to a predefined position within the working range of the fixed-base manipulator. The machine vision sub-system then detects the location of the material and estimates the fibre orientation to enable the manipulator to accurately handle the material. This is achieved by employing an ArUco marker detection algorithm [23] to compute the position of the material, and a Fourier transform-based algorithm [24] combined with a least squares line fitting method [25] to calculate the material's fibre orientation. Afterwards, the manipulator accurately grasps the material and places it onto a mould. Simulated trials and physical experiments are conducted to verify the cooperative behaviours of the Co-MRMS and quantify the accuracy of the vision system.

In addition, the modelling of flexible deformable objects, such as cables and fabrics, has been one of the most researched topics in robotics. To realistically simulate the interactive behaviour between robot actions (i.e. grasping and transfer actions) and material deformation, various techniques exist to model deformable objects. In [26], recent advances in modelling different types of flexible deformable objects for robotic manipulation, such as physics-based and mass-spring modelling, were reviewed, and approaches for building deformable object models were presented. Researchers in [27] established a model for deformable cables and investigated robotic cable assembly, addressing collision detection issues. This study adopts a particle-based modelling approach [28] to model material deformation within simulation when composite material is grasped and transferred by a manipulator.

Another issue in the automated handling of composite material is end-effector design. Until now, a number of grippers, such as grid grippers and suction cup grippers, have been designed. Suction cup grippers can handle deformable objects without damaging the material and are flexible enough to drape different shapes of composite material onto flat or curved moulds. Gerngross et al. [29] developed suction cup-based grippers for handling prepregs in offline programming. Their solution for the automated handling of dry textiles onto a double-curvature mould was verified both in an offline programming environment and on an industrial-scale manufacturing demonstrator. Ellekilde et al. [30] designed a novel draping tool with up to 120 suction cups, which has been tested on draping large aircraft part prepreg. Krogh et al. [31] researched the moving trajectories of a suction cup gripper for draping plies by establishing a cable model.
Therefore, a vacuum suction-based end-effector model is developed in this work to simulate sagging effects during grasping, which provides a useful simulation tool for quickly evaluating different gripper designs comprising an arrangement of multiple suction cups.

The remaining parts of the paper are organised as follows. First, the framework of the Co-MRMS, the modelling strategy for the interaction with deformable objects and the machine vision approaches are described in Section 2. Then, the details of the experimental setup are outlined in Section 3. Section 4 discusses the Co-MRMS evaluation through physical experiments, while Section 5 is devoted to a discussion of the findings, limitations and future directions of the work. Finally, conclusions are provided in Section 6.

From a hardware perspective, the proposed Co-MRMS involves four components: a mobile robot, a fixed-base robotic manipulator, a vision system and a host PC. The framework of the Co-MRMS is presented in Fig. 1. The mobile robot is responsible for transporting the composite material from a given starting location on the workshop floor (e.g. the storage area) to the robotic manipulator. Aided by the vision system, the estimated position and orientation of the raw material are sent to the fixed-base robot manipulator via the host PC. The manipulator is used for grasping each fibre ply and placing it correctly according to the designed lay-up manufacturing specifications. Robotic path planning for both robots was implemented in MATLAB® [32]. Image processing algorithms were developed using OpenCV [33], an open source computer vision and machine vision software library that provides a common infrastructure for computer vision applications and accelerates the development of machine perception capabilities. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilise the library and modify the code. Simulations of the entire process were developed using the CoppeliaSim robotic simulator [34], while the integration of the Co-MRMS was implemented via ROS (Robot Operating System) [35].

Deformable object modelling and suction cup end-effector design approach

To simulate the interactive behaviour between robot actions (such as grasping and transfer actions) and material deformation, various techniques exist to model non-rigid bodies (i.e. deformable objects), including the finite element method, mass-spring systems and numerical integration methods [36-38]. However, the modelling process cannot be performed within (or transferred effectively to) the simulation platforms developed to simulate the physical motion of robots in both static and dynamic conditions. Thus a method based on the particle-based system [28] is developed in this research to simulate the draping behaviour of composite material within the CoppeliaSim robotic simulator. The non-rigid characteristics of composite material were modelled through an array of individual cuboids and associated dummies. A simple 3×3 modelling example of composite material is presented in Fig. 2. Note that these primitive shapes individually behave as rigid bodies. As shown in the figure, the dummies are attached to cuboids and linked by dynamic constraint linkages. Once the structure is disturbed by an external force, the relative motion between adjacent dummies is constrained by the linkages and material deformation behaviours including bending and stretching are emulated.
Note that this approach is a simplified particle-based model and does not capture shear effects. It is therefore important to note that this model is not intended to replace the more realistic modelling of draping behaviours achieved by the other methods described earlier. Instead, it provides an approximation of the draping effects for visual simulation of the interactions between robot manipulation and draping within a single comprehensive robotic simulation environment for evaluating the high-level behaviours of the Co-MRMS. Material stiffness parameters have crucial effects on modelling deformable objects. The method presented above for modelling deformable objects in CoppeliaSim enables the stiffness of the overall material to be adjusted by tweaking two types of model parameters: the principal moments of inertia and the dimensions of the individual primitive cuboids. Increasing either the cuboid dimensions or the principal moments of inertia produces a stiffer material, while reducing either parameter leads to lower stiffness. The parameters chosen in this work are presented in Table 1.

Having developed an approach to model composite material as a non-rigid, deformable body within CoppeliaSim, it is also necessary to develop a model for the vacuum suction-based end-effector. CoppeliaSim's default library provides a simple vacuum suction cup model that enables the simulation of vacuum suction grasping for the manipulation of rigid bodies. However, without modification, this model cannot realistically interact with the composite material model, as it is designed to grasp only a single rigid body within the simulation environment. When used to grasp the simulated composite material, a numerical-method-induced sagging effect would occur around the vacuum suction cup, as the end-effector would pick up the deformable object from a single point corresponding to one cuboid. In reality, however, a suction cup gripper should maintain contact with the entire region of cloth directly underneath the suction cups. Therefore, the default suction cup model was modified for compatibility with the deformable material model by ensuring proper contact behaviour between all elements that lie within the grasp region of a suction cup during grasping operations. The modified suction cup gripper, with four suction cups, provides a useful simulation component for quickly evaluating different gripper designs comprising an arrangement of multiple suction cups. This is an important resource for future design processes that seek to minimise sagging effects during the transfer of composite material sheets of a known shape and size. The capability of the robot end-effector to deal with ply sagging was tested in the simulation environment. Fig. 3 shows images of an example simulation involving the use of 4-cup and single-cup vacuum suction grippers to transfer a sheet of composite material across the workspace. Compared with the single-cup vacuum suction gripper, the 4-cup vacuum suction gripper reduced the sagging effects significantly, reaching satisfactory performance in dealing with ply sagging. It should also be noted that each suction cup maintains complete contact with the material and sagging effects are minimised in the convex region defined by the four contacts between the gripper and the composite material.
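The particle-based material model described above links an array of rigid cuboids through constraints so that bending and stretching emerge from the relative motion of neighbouring elements. The following minimal Python sketch illustrates the same idea outside CoppeliaSim with a simple mass-spring grid integrated by explicit Euler steps; the grid size, stiffness, damping and time-step values are illustrative assumptions, not the parameters from Table 1.

```python
import numpy as np

# Illustrative mass-spring analogue of the cuboid-and-dummy grid model.
N = 3                 # 3x3 grid of particles, as in the Fig. 2 example
k, c = 50.0, 0.5      # spring stiffness and damping (assumed values)
rest = 0.05           # rest length between neighbours [m]
mass, g, dt = 0.01, np.array([0.0, 0.0, -9.81]), 1e-3

# Particle positions (flat sheet) and velocities.
pos = np.array([[i * rest, j * rest, 0.0] for i in range(N) for j in range(N)])
vel = np.zeros_like(pos)
pinned = {0}          # particle 0 acts like a suction-cup attachment point

# Structural springs between horizontal/vertical neighbours.
springs = [(i * N + j, i * N + j + 1) for i in range(N) for j in range(N - 1)]
springs += [(i * N + j, (i + 1) * N + j) for i in range(N - 1) for j in range(N)]

for _ in range(2000):                       # explicit Euler integration
    force = np.tile(mass * g, (N * N, 1))   # gravity on every particle
    for a, b in springs:
        d = pos[b] - pos[a]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length    # Hooke spring force
        force[a] += f
        force[b] -= f
    vel += dt * force / mass
    vel *= (1.0 - c * dt)                   # simple velocity damping
    for p in pinned:                        # grasped particle stays fixed
        vel[p] = 0.0
    pos += dt * vel

print("Sagging of far corner [m]:", pos[-1][2])
```

Pinning several particles instead of one mimics the multi-cup gripper: the more attachment points, the smaller the sag of the free corners, which is the qualitative behaviour observed in Fig. 3.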
Localisation and fibre direction identification approach

The aim of the machine vision system is to detect and locate the composite material and identify the orientation of the fibres in the workspace according to the requirements of composite material manufacturing processes. The extracted position and orientation of the material are provided to the host PC, which uses the information to plan target coordinates for the robot arm to grasp the composite material transported by the mobile robot. Generally, the position of the material could be approximated continually using the wheel encoders of the mobile robot, but substantial error accumulates over time due to wheel slippage. This can be compensated by the vision system, which provides higher-accuracy position information relative to the manipulator end-effector frame and is necessary to enable accurate localisation of the composite material. This corrected position estimate can further be used to eliminate the build-up of error within the wheel odometry-based localisation system. Thus machine vision plays a crucial role in the Co-MRMS developed for composite material manufacturing. By combining both the machine vision and wheel odometry-based data, the proposed localisation system can be robust and accurate. Yet the application requires an approach to object detection that is robust to variations in the size and shape of the material.

To overcome these challenges, first, a marker-based approach is adopted to enable the Co-MRMS to locate the material accurately. Then, a method for accurate and robust fibre orientation detection is developed. Here it is assumed that the relative position between the marker and the material is fixed. By locating the marker, the position of the material can be inferred from the relative position between the marker and the material. This provides the Co-MRMS with a higher-accuracy estimate of the position of the fibre material, which does not accumulate error over time. Then, the orientation of the fibres is detected to support the composite lay-up process. More details are given in the following sections.

Localisation approach

As shown in Fig. 4, this work uses a single ArUco vision marker for material localisation. The marker is defined by a 7×7 square array. Horizontal and vertical borders are formed by black squares. All other squares within the array may carry a black or white colour, where the arrangement of black and white interior squares encodes a binary pattern. Each ArUco marker has a unique pattern, which can be used to identify the marker. Thus the digital coding system used for detecting these markers proves to be robust and accurate, producing a low rate of false marker detections. Additionally, the layout of the four corners can be used to identify the orientation of the marker. With this encoded information, the marker can be used robustly to estimate the 3D position and orientation of the marker relative to a monocular camera.
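As a small illustration of the marker family just described (7×7 cells including the black border, i.e. a 6×6 internal grid), the following sketch generates such a marker with OpenCV's aruco module; the dictionary choice and marker ID are assumptions for illustration, and opencv-contrib-python is required.

```python
import cv2

# A 6x6-bit dictionary corresponds to a 7x7-cell marker once the black
# border is included (assumed dictionary for this illustration).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

marker_id = 23          # hypothetical marker ID
side_pixels = 400       # output image size in pixels

# Legacy drawing API (renamed generateImageMarker in OpenCV >= 4.7).
marker_img = cv2.aruco.drawMarker(dictionary, marker_id, side_pixels)
cv2.imwrite("aruco_6x6_id23.png", marker_img)
```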
The Suzuki algorithm [40] is then used to extract the contours, which are reconstructed by the Douglas-Peucker algorithm [41]. Here contours that do not contain four vertexes or lie too close together are discarded. After these image processing steps, the encoding of the marker is extracted and analysed. To achieve this, the perspective view of the marker must be projected onto a 2D plane. This is achieved through the use of a homography. Otsu's method [42] was applied using an optimal image threshold value to generate a binarized image. This results in a grid representation of the marker where each cell is assigned a binary value, determined by the average binary value of each pixel belonging to the cell. For example, a value of 1 is assigned if the majority of binarized pixels in the cell possess a value of 1. Cells belonging to the border of the image carry a value of 0, while all inner cells are analysed to obtain the internal encoding, which corresponds to a 6x6 internal grid area. To improve the accuracy of the marker detection, the corners of the marker are refined through subpixel interpolation. Finally, the pose of the camera is estimated by iteratively minimizing the reprojection error of the corners using the Levenberg-Marquardt algorithm [43]. As shown in Fig. 6, the material is placed on the x direction of the marker. Using dcc to denote the distance between the centre of the material and the marker (assumed to be known a priority), (x mar , y mar ) to denote the marker position, and Θ to denote the marker orientation in the x-y plane. The relative position of the material can be calculated by: x mat = x mar + d cc cos(Θ ) y mat = y mar + d cc sin(Θ ) Where the final position (x mat , y mat ) corresponds to the x and y positions of the material centroid. Using this approach, the position of the material can be determined robustly regardless of the size and shape of the material. Once the position of the material is detected, target commands are sent to the robot to move the end-effector above this centre position. Additionally, there is no restriction for the size and shape of the material as long as the centre of the fabric is fixed. Thus, this localisation approach is suitable for handling different size and shape of fabric patches. Fibre orientation detection approach The orientation of the composite material during placement on a mould must be carefully controlled in a composite lay-up process. This is due to the material being anisotropic, meaning it provides varying strength along different directions across the material. In order to make sure that the plies are layered as designed, strict requirements are imposed for the orientation of each layer of fibres to obtain the expected composite parts. The Fourier Transform [44] is a popular image processing tool that has proven to be effective for a variety of image processing applications such as image enhancement and image compression. In this work, the Fourier Transform is applied for fibre orientation analysis, where an image is converted into the frequency domain to obtain its spatial frequency components. The transformed image can be calculated by: Where µ and ν are spatial frequencies. In order to robustly detect the orientation of fibres from an image using the Fourier transform, a high gradient image possessing strong directional change in intensity must be acquired. This is achieved through the use of a spotlight mounted together with the camera to produce strong reflections from the fibres of the material. 
The detection process is shown in Fig. 7. An image captured by the camera is first converted to a grey-scale image. Then the Fourier transform is applied to obtain the frequency domain image. A series of morphological procedures is applied to generate several discrete points that lie along the line in the direction of the fibres. The centres of these points are obtained by contour detection. Finally, a straight line is fitted to this set of points and the orientation of this line is calculated from its slope. Here curve fitting is achieved through the use of the least squares line fitting method. Assume that the points obtained from the morphological procedures are (x_1, y_1), ..., (x_n, y_n), and the fitted straight line equation is y_i = a x_i + b. The curve fitting process identifies values for (a, b) that minimise the total square error

E = Σ_{i=1}^{n} (y_i − a x_i − b)²

The above equation can be re-expressed in matrix form as

E = ||Y − XB||²

where

Y = [y_1, ..., y_n]^T,  X = [[x_1, 1]; ...; [x_n, 1]],  B = [a, b]^T.

The stationary condition of E with respect to B,

∂E/∂B = −2 X^T (Y − XB) = 0,

leads to the stationary point X^T X B = X^T Y. Thus the fit is given by the least squares solution B = [X^T X]^{−1} X^T Y, provided that [X^T X]^{−1} exists, which depends on the collected data. Solving for B yields the values of (a, b) and hence the fitted line y = ax + b. Using the computed gradient of the line a, the orientation is calculated by Θ = arctan(a), where Θ is the fibre orientation angle in the x-y plane taking the x-axis as the reference. Therefore, as long as the relative orientation of fibres and yarn is known, the fibre orientation detection can be adapted to different kinds of prepregs. The prepreg used in this work is a carbon fibre reinforced polymer (CFRP) composite. The relative orientation of fibres and sewing yarn is known in advance and is 90°.

Experimental Setup

The composite material used here is a small sheet of fabric prepreg. Both the simulation-based and physical experiments are described below, where the specific robotic layout and designed tools are defined.

Robot setup

In this paper, the Turtlebot3 Burger differential drive mobile robot was chosen as the mobile robot platform in both simulation and physical experiments due to the unavailability of industry-standard mobile robots. The robot setup in the simulation environment is presented in Fig. 8. As the Turtlebot3 Burger is an open source mobile robot, low-level access to the robot's individual functionalities is possible, providing easy access to wheel odometry readings that can be sent to the host PC through a ROS network. For the fixed-base robotic manipulator, the 6-degrees-of-freedom KUKA KR90 R3100 industrial manipulator was chosen for implementation in the simulation environment to model a realistic industrial setting. In the physical experiments, a 6-degrees-of-freedom KUKA KR6 R900 manipulator was used due to its smaller scale and availability. Nevertheless, both robots share the same control scheme, allowing algorithms to transfer without modification between the two systems.

Machine vision system design

The machine vision system comprises a commercial low-cost webcam, a spotlight and a customised camera mounting unit. Localisation of the material and fibre orientation detection are achieved through the use of a spotlight mounted together with the camera to produce strong reflections from the fibres of the material.
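A minimal sketch of the Fourier-transform-based orientation pipeline described earlier is given below: compute the centred 2D FFT magnitude, threshold it to isolate the dominant directional energy, fit a line to the resulting points with least squares, and take Θ = arctan(a). The threshold and filename are illustrative assumptions, and note that the bright spectral line is typically perpendicular to the spatial stripe direction, so a fixed angular offset may be needed depending on the reflection pattern.

```python
import cv2
import numpy as np

img = cv2.imread("fibre_patch.png", cv2.IMREAD_GRAYSCALE)

# Centred 2D FFT magnitude spectrum (log scale for dynamic range).
spectrum = np.fft.fftshift(np.fft.fft2(img.astype(np.float32)))
mag = np.log1p(np.abs(spectrum))

# Keep only the strongest spectral responses; reflective fibres produce
# a bright line through the origin in the frequency domain.
mask = (mag > 0.9 * mag.max()).astype(np.uint8)
ys, xs = np.nonzero(mask)

# Least squares line fit y = a*x + b through the selected points,
# equivalent to B = (X^T X)^{-1} X^T Y in the text.
a, b = np.polyfit(xs.astype(float), ys.astype(float), deg=1)
theta = np.degrees(np.arctan(a))
print(f"dominant spectral line orientation: {theta:.1f} deg")
```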
Experimental Setup
The composite material used here is a small sheet of fabric prepreg. Both the simulation-based and physical experiments are described below, where the specific robotic layout and designed tools are defined.

Robot setup
In this paper, the Turtlebot3 Burger differential drive mobile robot was chosen as the mobile robot platform in both simulation and physical experiments due to the unavailability of industry-standard mobile robots. The robot setup in the simulation environment is presented in Fig. 8. As the Turtlebot3 Burger is an open-source mobile robot, low-level access to the robot's individual functionalities is possible, providing easy access to wheel odometry-based readings that can be sent to the host PC through a ROS network. For the fixed-base robotic manipulator, the 6-degrees-of-freedom KUKA KR90 R3100 industrial manipulator was chosen for implementation in the simulation environment to model a realistic industrial setting. In the physical experiments, a 6-degrees-of-freedom KUKA KR6 R900 manipulator was used instead due to its smaller scale and availability. Nevertheless, both robots share the same control scheme, allowing algorithms to transfer without modification between the two systems.

Machine vision system design
The machine vision system comprises a commercial low-cost webcam, a spotlight and a customised camera mounting unit. Localisation of the material and fibre orientation detection are achieved through the use of a spotlight mounted together with the camera to produce strong reflections from the fibres of the material. In order to attach the camera to the end-effector of the fixed-base manipulator and ensure that the camera is orthogonal to the material plane, a camera mounting unit was designed in CAD (computer-aided design) software and then 3D printed. The CAD design and the mounted 3D-printed piece are presented in Fig. 9. During physical experiments, the camera was inserted into the holder facing downwards, while the spotlight was attached to the external surface of the mounting unit facing in the same direction as the camera.

Host computer and related software
Following the description of the proposed Co-MRMS in Section 2.1, Matlab, CoppeliaSim and OpenCV have been used to support the development of the crucial robotic capabilities for this work. The path planning routine for the mobile robot, based upon a bi-directional variant of the Rapidly-exploring Random Tree algorithm [45], was implemented in Matlab. Likewise, the planning of manipulator actions for grasping was developed in Matlab, where reasoning is applied to sensory information to identify target positions and complete motions for the end-effector. The robot was actuated using Point-To-Point movement. A remote API library, developed by Coppelia Robotics, was used to send the resulting actuation commands from Matlab to the simulated robots in CoppeliaSim. CoppeliaSim provides an extensive environment for the development of the integrated simulation. In addition, the deformable object was modelled in CoppeliaSim by leveraging its support for the simulation of dynamic behaviours, which is achieved through the Bullet 2.78 physics engine. This handles the complex calculation of composite material deformation during handling operations and enables the visualisation of the material's deformation behaviours. The integrated simulation environment is presented in Fig. 10 and comprises the mobile robot, the fixed robotic manipulator, the composite material, a work surface, a cube-shaped mould and the mounted camera. For the physical implementation, the ITRA toolbox [46], developed for the control of KUKA robots, provided the interface for directly sending actuation commands from Matlab to the KUKA robot controller unit for manipulator control, while ROS provided the interface for the actuation of the Turtlebot3 Burger. The vision system relied upon images captured by a webcam mounted on the end-effector of the manipulator to observe the environment. The OpenCV library was then used to develop the machine vision algorithms that processed the images obtained by the camera.

Performance Evaluation
To validate the developed system, several experiments were conducted to test the capabilities of the Co-MRMS. Initially, simulation-based experiments were carried out according to the approaches proposed in Section 2 and the accuracy of the vision system was assessed. Subsequently, physical experiments were conducted on an integrated robotic system to validate the combined behaviour of the proposed Co-MRMS and assess the accuracy of the machine vision algorithms in real-world scenes.

Simulation-based experiments
The Co-MRMS, which employs a KUKA KR90 R3100 industrial fixed-base manipulator and a Turtlebot3 Burger differential drive mobile robot, was first modelled in CoppeliaSim to verify the performance of the proposed system in fulfilling the transportation and lay-up task.
Additionally, an integrated camera and a gripper unit with four suction cups were modelled on the KUKA KR90 end-effector so that the detection and grasping of the material could be simulated. Based on the modelled Co-MRMS, two simulation-based experiments were conducted to evaluate the attainable accuracy of the composite material vision system. In the first experiment, the localisation accuracy was evaluated. Using the modified bi-directional RRT algorithm [45] to compute a collision-free path, the mobile robot drove autonomously to a randomly generated goal within the manipulator workspace. Then, the vision system was employed to correct the simulated error in the wheel odometry-based positioning system by applying the object localisation algorithm described in Section 2.3.1. To evaluate the repeatability of the localisation results, this experiment was conducted 10 times. In addition, to simulate the accumulation of error in wheel odometry observed in real environments, Gaussian noise was introduced and superimposed on the simulated wheel odometry measurement of the mobile robot's position relative to its starting position. Gaussian noise has generally been used in signal processing to deal with uncorrelated random noise and is also commonly adopted in neural networks for modelling uncertainties [47]. It is statistically defined by a probability density function (PDF) equivalent to a normal distribution (also known as a Gaussian distribution). In other words, the odometry error due to wheel slippage was assumed to be Gaussian-distributed. Setting the mean and standard deviation of the Gaussian distribution to 100 mm and 70 mm, respectively, the wheel odometry position errors in x and y are drawn from

p(e) = (1 / (σ√(2π))) exp(−(e − µ)² / (2σ²))

where µ is the Gaussian mean and σ is the standard deviation. With the material position data obtained from the machine vision system and wheel odometry, the localisation accuracy could be evaluated through the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). Here the ground truth was retrieved from the simulation. The results are presented in Table 2, where the MAE and RMSE of wheel odometry were 158.48 mm and 121.21 mm, respectively, while the MAE and RMSE of the vision system were 11.53 mm and 9.00 mm, respectively. Compared to the wheel odometry-based estimation, the proposed machine vision system reduced the localisation error by 93% and demonstrated its ability to improve the localisation accuracy. In the second experiment, the fibre orientation detection algorithm was evaluated by comparing the output of the algorithm against the ground truth. Here the orientation of the material was incremented by 10 degrees within the range of [0°, 180°] relative to the camera frame. As before, the accuracy is expressed by the MAE and RMSE, shown in Table 2. The MAE and RMSE for fibre orientation detection were found to be 0.70° and 0.048°, respectively. Since the experiments were conducted in a simulation environment, the lighting conditions in the scene could be controlled, which shows that under ideal conditions the fibre orientation detection algorithm can provide accurate estimates.
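As an illustration of the noise model and error metrics used here, the sketch below draws per-axis odometry errors from the stated Gaussian (µ = 100 mm, σ = 70 mm) and computes MAE and RMSE. The exact error definition (per-axis versus Euclidean distance) is not spelled out above, so this sketch uses the Euclidean position error per trial; all numbers it produces are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n_trials = 100.0, 70.0, 10          # mm, as stated above

# Injected odometry error per trial; the vision system would be evaluated
# the same way, with err = estimate - ground truth per trial.
err_x = rng.normal(mu, sigma, n_trials)
err_y = rng.normal(mu, sigma, n_trials)
err = np.hypot(err_x, err_y)                   # Euclidean position error

mae = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
print(f"odometry MAE = {mae:.2f} mm, RMSE = {rmse:.2f} mm")
```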
System interaction behaviour evaluation
The cooperative system interaction behaviour was evaluated through physical experiments, from which a set of execution routines consisting of five active phases and two idle phases was obtained. This corresponds to a complete performance with a duration of approximately 87 seconds, of which approximately 18 seconds were idle pauses. Fig. 11 plots the time evolution of the x and y positions of the mobile robot (odom_x and odom_y, respectively), and the x, y and z positions of the manipulator end-effector (kuka_x, kuka_y, and kuka_z, respectively) across these execution phases, recorded from a single trial of the experiment. The first phase consists of the autonomous drive of the mobile robot. The duration of this phase varies according to the start point, the goal point and the resulting path between these two points. After the mobile robot arrives at the goal point, it remains stationary to await the machine vision processing phase. This corresponds to a flat curve from the end of phase 1 for odom_x and odom_y in Fig. 11. After a brief pause in which all systems remain idle to indicate that the mobile robot has reached its destination, the host PC sends the wheel odometry estimate of the mobile robot's position as a target command to drive the manipulator towards the approximate location of the material (phase 2). Here the built-up error in the estimated position arising from wheel slippage causes a misalignment between the centre of the composite material (carried by the mobile robot) and the end-effector of the manipulator. Once the manipulator reaches the target position, both robots remain stationary as the vision system captures an image and runs the localisation algorithm to compute a higher-accuracy estimate of the mobile robot's true position. Phase 3 then consists of refining the position of the end-effector using the vision-based estimate of the mobile robot position to reduce the misalignment between the manipulator and the composite material. In the fourth phase, the manipulator lowers the z position of the end-effector from 420 mm to 250 mm (relative to the base frame of the manipulator, which is treated as the world coordinate frame) to provide the camera with a close-up view of the composite material. This is necessary to ensure a satisfactory image can be obtained for accurate fibre orientation detection. In the final phase, the machine vision parameters are adjusted for the new image depth and the fibre orientation angle of the material is computed using the algorithm described in Section 2. This information is used to rotate the end-effector to correct for the angular offset between the end-effector and the composite material. This facilitates the placement of the material in a controlled orientation during grasping operations by ensuring that the fibre direction is always aligned with the z-axis rotation of the end-effector. This experiment demonstrated the capability of the integrated system to correct the manipulator positional offset arising from wheel slippage of the mobile robot through the higher-accuracy estimate provided by machine vision. Compared to the wheel odometry-based localisation, the vision system corrected the position in the x and y directions by 156.87 mm and 23.17 mm in this case.
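The phase-3 refinement described above amounts to a single additive correction of the end-effector target. The toy sketch below reproduces that step; the coordinates are hypothetical and were chosen so that the resulting offsets match the 156.87 mm and 23.17 mm quoted for this trial.

```python
import numpy as np

# Phase-2 target from wheel odometry and phase-3 estimate from machine vision,
# both expressed in the manipulator base frame (hypothetical values, mm).
odom_estimate   = np.array([1250.00, 480.00])
vision_estimate = np.array([1406.87, 503.17])

correction = vision_estimate - odom_estimate   # offset applied to the end-effector
print(f"phase-3 correction: dx = {correction[0]:.2f} mm, dy = {correction[1]:.2f} mm")
```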
Machine vision system accuracy evaluation
To measure the accuracy of the machine vision algorithms in the real world, two additional experiments were conducted. The first experiment was used to quantify the errors in the measured position of the mobile robot using the vision-based localisation algorithm and wheel odometry. The setup for the experiment is shown in Fig. 9, where the camera and spotlight are mounted on the end-effector of the KUKA robot positioned above the Turtlebot3 Burger platform. The mobile robot was driven autonomously to a randomly generated goal within the workspace of the manipulator and the wheel odometry-based position reading was obtained. The fixed-base manipulator was then manually controlled to align the end-effector directly above the centroid of the composite material. The feedback position of the end-effector was obtained from the KUKA controller and used as the ground truth in this experiment. Finally, the vision-based estimate of the material position was obtained by applying the localisation algorithm with both robots fixed. This experiment was conducted 20 times for statistical significance; in each trial the mobile robot drove autonomously to a randomly generated goal within the workspace of the manipulator, so the travelled distance differed from trial to trial. The average travelled distance of the mobile robot was 340.9 mm. Table 3 reports the MAE and RMSE for both the wheel odometry estimate and the vision-based estimate relative to the ground truth. It was found that the MAE and RMSE for wheel odometry were 24.72 mm and 19.88 mm, respectively. This was much larger than the MAE and RMSE for vision-based localisation, which were 4.75 mm and 4.04 mm, respectively. Evidently, machine vision reduced the wheel odometry-based error by 80%, which significantly improves the localisation accuracy when used in conjunction with wheel odometry. A second experiment was conducted to quantify the accuracy of machine vision for fibre orientation detection. As in the first experiment, the camera and spotlight were mounted on the end-effector of the manipulator. A sample piece of composite material was placed in a fixed position in the workspace of the manipulator while the end-effector was positioned directly above the centre of the material with their rotation axes aligned at 0°. The orientation of the end-effector about the z axis was incrementally increased by 10° within the range of [0°, 180°]. At each interval the fibre orientation detection algorithm was used to measure the orientation angle of the fibre relative to the camera, which should coincide with the rotation angle of the end-effector under ideal conditions. Thus the measured angle was compared against the end-effector rotation, used as the ground truth, to compute the MAE and RMSE. As shown in Table 3, the MAE and RMSE for fibre orientation detection were 5.11° and 5.73°, respectively. The error in fibre orientation detection in the real world was greater than in the simulation, because the composite material had been modelled in simulation as a non-rigid body whose optical features (high specular reflectivity and high absorption of light) were not captured, and because the illumination environment in the real world is far more challenging than the simulated one. Moreover, the alignment between the camera and the normal of the material was not exact in the physical setup, which introduces additional projection errors when detecting the orientation of the fibre, as shown in Fig. 12. The investigation shows that the closer the true fibre orientation is to 0°, the higher the accuracy of the fibre orientation detection. This could be overcome by applying a two-step detection strategy as follows. The first step consists of computing an approximate rotation angle for the end-effector to roughly align the camera with the fibre orientation, which corresponds to the zero-degree region.
Subsequently, a finer tuning of the end-effector rotation is performed by applying a second instance of the fibre orientation detection algorithm, which produces an estimate of the fibre orientation angle with minimal error. The strategy was evaluated in physical experiments. As expected, it yielded greater accuracy in fibre orientation detection: the error of the vision system was reduced to approximately 0.23 degrees. In comparison, the detection error was around or below 1 degree for the fibre reflection model derived in [22], and the frequency-domain machine vision algorithm in [48] showed an error of around 5 degrees for braid angle measurement. This indicates that the proposed machine vision system with the two-step strategy can achieve high accuracy in fibre orientation detection. In addition, a systematic error of approximately 1.84 degrees remained due to the non-alignment between the camera and the fibre orientation. Nevertheless, the system is capable of meeting the high-accuracy orientation detection requirements of composite material manufacturing.
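The two-step strategy can be summarised as: measure coarsely, rotate to the accurate near-zero region, then measure again and sum the two readings. The toy simulation below illustrates why this helps, using an assumed error model in which the detector's error grows with the measured angle; the error model and all numbers are invented for illustration.

```python
def two_step_orientation(measure, rotate):
    """Coarse measurement, rough alignment, then a fine residual measurement."""
    coarse = measure()        # step 1: larger error far from 0 deg
    rotate(coarse)            # roughly align the camera with the fibres
    fine = measure()          # step 2: small residual, measured near 0 deg
    return coarse + fine

# Toy detector: true angle 72 deg, with an assumed 5%-of-angle measurement error.
state = {"offset": 72.0}
measure = lambda: state["offset"] * 1.05
def rotate(a):
    state["offset"] -= a

print(f"two-step estimate: {two_step_orientation(measure, rotate):.2f} deg")
# ~71.8 deg, versus 75.6 deg for a single coarse measurement.
```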
Discussions
This section discusses the obtained experimental results. Firstly, it should be noted that the simulated trials incorporating the manipulation actions for grasping the material have so far not been included in the physical trials due to the lack of vacuum suction hardware. Nevertheless, in both the simulation-based and physical experiments, autonomous material transportation, localisation and fibre orientation detection capabilities were achieved, and it has been shown that the performance of the system in simulation was highly consistent with the experimental results from the physical trials. Future work will seek to integrate a vacuum gripper with the existing physical system to further develop and evaluate the material handling capabilities. Secondly, the mobile robot used in this work was not of an industrial standard; instead, the educational mobile robot platform Turtlebot3 Burger was adopted for the investigations conducted. This meant experiments and evaluations were limited to small-scale setups due to the small size of the Turtlebot3 platform. Thus, additional development work is necessary to implement the proposed system framework on an industrial-standard set of hardware to validate the proposed system. This work has so far focused on the detection and handling of a single sheet of material. Current ongoing work is investigating the lay-up task with the aim of developing an algorithm for the autonomous lay-up of composite materials, i.e. working with multiple plies. Another interesting avenue to examine is the feasibility of developing a method to correct any creases or poor contacts between the composite material and the mould in the composite draping process through the use of the manipulator(s), which could maximise the quality of the draping process when performed autonomously using a cooperative robotic system.

Conclusions
In this study, a cooperative mobile robot and manipulator system (Co-MRMS), comprising a fixed-base manipulator, an autonomous mobile robot and a machine vision sub-system, was developed as a promising strategy for autonomous material transfer and handling tasks to advance composite manufacturing. To demonstrate the feasibility and effectiveness of the proposed Co-MRMS, comprehensive simulations and physical experiments have been conducted. The integrated simulation, developed in CoppeliaSim, simulated a material transfer operation that involves the use of a mobile robot to transport composite material to a robotic manipulator, which grasps and transfers the material to a mould. To realistically simulate the interactions between the robots and the non-rigid nature of composite materials, a method for modelling deformable material within CoppeliaSim was devised. Physical experiments were performed to evaluate the performance of the individual components of the proposed Co-MRMS through a small-scale robotic cell consisting of a KUKA KR6 R900 manipulator and the Turtlebot3 Burger mobile robot. An effective machine vision system was developed to support the robotic tasks described above by providing the capabilities for object detection, localisation and fibre orientation detection. When compared to the estimation achieved using wheel odometry, the proposed machine vision system reduced the localisation error by 93% and 80% in the simulation-based and physical experiments, respectively. Future work will focus on validating the proposed system on industrial-standard platforms and improving the system, e.g. integrating a vacuum gripper, quantifying system efficiency, extending the work to multiple plies and developing a method for draping correction. In conclusion, by exploiting the availability of wheel odometry and integrating it with machine vision algorithms within the proposed Co-MRMS, it is possible to implement a flexible system that provides autonomous material transportation and sufficiently accurate material handling capabilities extending beyond what is currently adopted in the industry.

Figure 1. The framework of the Co-MRMS.
Figure 2. Modelling the non-rigid nature of composite material as an array of dynamically-linked cuboids.
Statistical modeling of surveillance data to identify correlates of urban malaria risk: A population-based study in the Amazon Basin

Despite the recent malaria burden reduction in the Americas, focal transmission persists across the Amazon Basin. Timely analysis of surveillance data is crucial to characterize high-risk individuals and households for better targeting of regional elimination efforts. Here we analyzed 5,480 records of laboratory-confirmed clinical malaria episodes combined with demographic and socioeconomic information to identify risk factors for elevated malaria incidence in Mâncio Lima, the main urban transmission hotspot of Brazil. Overdispersed malaria count data clustered into households were fitted with random-effects zero-inflated negative binomial regression models. Random-effect predictors were used to characterize the spatial heterogeneity in malaria risk at the household level. Adult males were identified as the population stratum at greatest risk, likely due to increased occupational exposure away from the town. However, poor housing and residence in the less urbanized periphery of the town were also found to be key predictors of malaria risk, consistent with substantial local transmission. Two thirds of the 8,878 urban residents remained uninfected after 23,975 person-years of follow-up. Importantly, we estimated that nearly 14% of them, mostly children and older adults living in the central urban hub, were free of malaria risk, being either unexposed, naturally unsusceptible, or immune to infection. We conclude that statistical modeling of routinely collected, but often neglected, malaria surveillance data can be explored to characterize drivers of transmission heterogeneity at the community level and provide evidence for the rational deployment of control interventions.

Introduction
Malaria continues to be a major cause of morbidity and mortality in sub-Saharan Africa, South and Southeast Asia, Oceania, and Latin America, with 219 million cases and 435,000 deaths worldwide in 2017 [1]. The disease typically affects the rural poor, since urbanization tends to reduce malaria risk through improved housing, greater access to health services, and environmental changes that may limit vector abundance [2]. Indeed, malaria rates are typically lower in cities, compared to their rural surroundings, in most [3,4], although not all [5], settings. The present study focuses on the Juruá Valley, next to the Brazil-Peru border. With 0.5% of the Amazon's population, the region accounts for 18.5% of the country's malaria burden, estimated at 157,000 cases in 2016 [1]. A large proportion of infections in the Juruá Valley are reportedly acquired in urban settings: up to 45% in the municipality of Mâncio Lima, compared with the country's average of 17% in 2013 [28]. Here, we characterize high-risk individuals and households by applying random-effects zero-inflated negative binomial (RE-ZINB) regression analysis to overdispersed, household-clustered surveillance data. Our findings may allow for better targeting of interventions in the main malaria hotspot of Brazil.

Ethics statement
The study protocol was approved by the Institutional Review Board of the Institute of Biomedical Sciences, University of São Paulo, Brazil (CEPH-ICB 1368/17); written informed consent and assent were obtained.

Study area and population
The municipality of Mâncio Lima covers a surface area of 4,672 km² in northwestern Brazil (S1 Fig) and comprises a single town next to the Japiim river, where nearly half of its 17,545 inhabitants reside.
Streams, wetlands rich in moriche palm trees, and fish farming ponds are widespread in the town. With a typical equatorial humid climate, Mâncio Lima receives most rainfall between November and April, but malaria transmission occurs year-round. The annual parasite incidence, estimated at 436.4 cases per 1,000 inhabitants in 2016, is the highest for a municipality in Brazil [29]. Local distribution of long-lasting insecticidal bed nets (LLINs) and indoor residual spraying (IRS) with pyrethroids or propoxur are currently limited to high-risk households. The primary local malaria vector is An. darlingi, but An. albitarsis s.l. is also abundant across the town, especially in fish farming ponds [22,30]. The study population comprised all permanent residents in the town of Mâncio Lima enumerated by a census survey between November 2015 and April 2016. During the survey, dwellings were geo-localized and a questionnaire was applied to collect demographic, health, behavioral, and socioeconomic data. Principal component analysis was used to compute an assets-based wealth index for each household [31].

Malaria surveillance and treatment
The study outcome was laboratory-confirmed malaria, defined as any episode of parasitemia, irrespective of parasite density or symptoms, diagnosed through active or passive case detection from 1 January 2014 through 30 September 2016. We retrieved malaria case records from the electronic malaria notification system of the Ministry of Health of Brazil (http://200.214.130.44/sivep_malaria/). Because malaria is a notifiable disease in Brazil and diagnostic testing and treatment are not available outside the network of government-run health care facilities, the database comprises the vast majority of malaria episodes confirmed by thick-smear microscopy in Mâncio Lima residents over the study period (33 months). According to a recent estimate, the electronic malaria notification system captures 99.6% of all clinical malaria cases diagnosed countrywide [32]. At least 100 fields are routinely examined for malaria parasites under 1000× magnification by experienced local microscopists before a slide is declared negative. Partially supervised chloroquine-primaquine and artemether-lumefantrine regimens were administered to treat Plasmodium vivax and P. falciparum malaria, respectively [33]. A minimal interval of 28 days between two consecutive episodes was required to count the second episode as a new malaria infection; when different species were diagnosed <28 days apart, a single mixed-species infection was counted.
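The 28-day rule above is a simple collapse over date-sorted notification records. The sketch below gives one literal reading of it in Python; the data structures and field names are invented for illustration and do not reflect the actual notification database schema.

```python
from datetime import date, timedelta

MIN_INTERVAL = timedelta(days=28)

def count_episodes(records):
    """Collapse (date, species) notification records into distinct episodes:
    a record <28 days after the previous episode is folded into it, and a
    differing species within that window makes the episode mixed-species."""
    episodes = []
    for day, species in sorted(records):
        if episodes and day - episodes[-1][0] < MIN_INTERVAL:
            if species != episodes[-1][1]:
                episodes[-1] = (episodes[-1][0], "mixed")
            continue
        episodes.append((day, species))
    return episodes

records = [(date(2014, 3, 1), "P. vivax"),
           (date(2014, 3, 20), "P. falciparum"),   # <28 days: one mixed infection
           (date(2014, 6, 2), "P. vivax")]
print(count_episodes(records))                     # two episodes in total
```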
Statistical methods
The R package gamlss [34] was used for the statistical analysis (R Foundation for Statistical Computing, Vienna, Austria). The generalized additive models for location, scale and shape (GAMLSS) approach [35] was used to fit ZINB [10,24] distribution functions to malaria counts and to choose the best-fitting model. We note that the term "additive" refers to the option, provided by the GAMLSS approach but not applied here, to include nonparametric components in the linear predictors of the models. We used randomized normal quantile-quantile (Q-Q) plots and detrended normal Q-Q plots, known as worm plots, as diagnostic tools to analyze residuals [36]. Individual- and household-level explanatory variables were added to the count component of the first standard ZINB regression model. The individual-level variables entered in the multivariable models were: age (stratified as 0 [birth]-5, 6-15, 16-40, 41-60, and >60 years); sex (female vs. male); reported bed net use, either insecticide-impregnated or not, the previous night (no vs. yes); sleeping time (before 10 pm, between 10 and 11 pm, after 11 pm); and wake-up time (before 7 am, between 7 and 8 am, after 8 am). Household-level variables were: household size (<5 vs. ≥5 people); wealth index (stratified into terciles); LLIN available in the household (no, yes, unknown); IRS within the past three years (no, yes, unknown); and housing quality indicators such as incomplete walls and ceiling (no vs. yes), presence of screens in doors and windows (no vs. yes), and type of lavatory (indoors vs. outhouse). We used the R package GoodmanKruskal to identify significant pairwise associations between model covariates; none was found (S2 File). The multivariable model was adjusted for the covariate "follow-up duration", the number of person-years at risk contributed by each study participant. This was calculated for the period between the date of birth or 1 January 2014, whichever was more recent, and 30 September 2016, when the follow-up ended. Next, to account for the clustering of observations into households, household-level RE terms were also incorporated into the multivariable ZINB regression. Worm plot diagnostics of the RE-ZINB model indicated an excessively large fitted variance, with many data points lying outside the 95% confidence intervals (CI) of the expected deviation. To reach satisfactory model diagnostics, we shrank the random-effects distribution toward the overall mean [37] by decreasing the degrees of freedom originally estimated by the model to 150; further details are provided in S1 File. We next used the random-effect predictors to characterize the spatial heterogeneity in malaria risk while controlling for potential confounders [26]. The high (low) magnitude of household random-effect predictors was used to select households with higher (lower) than average malaria incidence density. We examined the spatial distribution of households with the top 5% and bottom 5% random-effect predictors of the RE-ZINB models (here termed "hot houses" and "cold houses", respectively) by mapping their GPS coordinates. Given the results of the spatial analysis described above, we tested whether model fitting could be further improved by including a variable describing subjects' zone of residence, whether in the center ("urban hub") or in the less-urbanized periphery of the town, close to the most vegetated areas. To this end, geo-localized houses were classified as centrally or peripherally situated using the computational approach described in S3 File. We next used the Akaike information criterion (AIC) to compare the quality of RE-ZINB models with and without the covariate "zone of residence". To further characterize study participants at no risk of malaria [10], we built additional RE-ZINB models with the following variables added to the structural zero component: zone of residence, age, sex, and follow-up duration. The following variables were initially entered in the count component: age, sex, bed net use, follow-up duration, zone of residence, household size, LLIN availability, recent IRS, presence of complete walls, and type of lavatory.
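For readers who want to reproduce the flavour of this analysis outside R, the sketch below fits a plain ZINB model with a logit zero-inflation component to simulated data using Python's statsmodels. It is only an approximation of the approach described above: statsmodels does not support the household-level random effects or the GAMLSS shrinkage used here, the covariates are reduced to two toy indicators, and follow-up time enters as an exposure offset rather than a covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "periphery": rng.integers(0, 2, n),
    "person_years": rng.uniform(0.5, 2.75, n),
})
# Simulated episode counts with some structural zeros (not-at-risk subjects).
lam = np.exp(-1.2 + 0.5 * df["male"] + 0.4 * df["periphery"]) * df["person_years"]
y = rng.poisson(lam)
y[rng.random(n) < 0.15] = 0
df["episodes"] = y

model = ZeroInflatedNegativeBinomialP(
    df["episodes"],
    sm.add_constant(df[["male", "periphery"]]),       # count (NB) component
    exog_infl=sm.add_constant(df[["periphery"]]),     # structural-zero (logit) component
    exposure=df["person_years"],                      # follow-up time
    inflation="logit",
)
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.params)
```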
The best RE-ZINB models were selected using the strategy stepGAICALL.A() proposed by Stasinopoulos and colleagues [34], with the following steps: (a) an initial NB model was built for the count component (forward approach); (b) given this model, a model was built for the logit component (forward approach); (c) given the NB and logit models, we checked whether the terms of the logit model were needed, using backward elimination; (d) given the NB and logit models, we checked whether the terms of the NB model were needed (backward elimination). The generalized AIC (GAIC) was used for model comparison.

Results
The study comprised 8,878 subjects with ages ranging between <1 month and 105 years (mean, 27.0 years), distributed into 2,329 households. They experienced a total of 5,480 laboratory-confirmed malaria episodes over 23,975.3 person-years of follow-up, with an overall malaria incidence density estimated at 22.6 cases per 100 person-years at risk. Plasmodium vivax accounted for 84.2% of the episodes (incidence density, 19.0 cases per 100 person-years at risk); 14.4% of the infections were due to P. falciparum (incidence density, 3.2 cases per 100 person-years at risk), and 1.4% were due to both species. The incidence densities were lowest among under-five children and over-sixty adults (Fig 1A), mostly due to the age-related variation in P. vivax incidence (Fig 1B). This age-incidence pattern likely reflects the combined effect of differential exposure and acquired immunity across age groups. Male adults aged 16-60 years were more often infected than their female counterparts (Fig 1A), consistent with increased occupational exposure.

Statistical model fitting
The frequency distribution of malaria cases was overdispersed, with a mean of 0.62 (range, 0 to 12; variance, 1.4) episodes per person. The vast majority (67.4%) of study participants remained free of malaria, and less than 1% of them had six or more repeated episodes during the follow-up. The empirical frequency distribution was properly fitted with ZINB distributions (Fig 2). We analyzed data from 8,431 individuals (447 were excluded due to missing values in key variables), and the resulting RE-ZINB count regression model comprises the explanatory variables listed in S1 Table. RE-ZINB regression analysis estimated that 13.6% (95% CI, 5.1-31.3%) of the study participants (roughly 1,200 residents) were intrinsically free of malaria risk and accounted for the excess zero counts beyond the NB expectations. We next examined the spatial distribution of "hot houses" and "cold houses". These were defined as the households within the top 5% (hot houses) and the bottom 5% (cold houses) of the random-effect predictor estimates for the count compartment of the RE-ZINB regression model, adjusted for all explanatory variables shown in S1 Table. We show that most hot houses are indeed situated in the periphery of the town (Fig 3) and, therefore, geo-localized houses were classified as centrally or peripherally situated using the computational method described in S3 File. The covariate indicating the zone of residence (whether in the center or in the less-urbanized periphery of the town) was introduced into the regression, and the fit of the RE-ZINB model improved (Table 1). These results further indicate that households in the less-urbanized periphery of the town, surrounded by more densely vegetated areas, constitute the priority target for spatial interventions aimed at reducing local malaria transmission.
Fig 3. Study households with lower-than-average ("cold houses") and higher-than-average malaria incidence ("hot houses") were identified using the random-effect predictors from the zero-inflated negative binomial (RE-ZINB) model. Red dots show "hot houses" with the top 5% random-effect predictors and blue dots show "cold houses" with the bottom 5% random-effect predictors of the RE-ZINB model; all other households are represented as grey dots. Vegetated areas (data retrieved from the Brazilian Institute for Space Research (2018) PRODES Project, http://www.inpe.br/cra/projetos_pesquisas/terraclass2014.php) are shown in green.

Table 2 shows the independent associations between explanatory variables and malaria incidence density revealed by the best-fitting multivariable ZINB regression model with RE estimators, which includes zone of residence as a covariate. We note that the count compartment of the ZINB model allows for identifying predictors of malaria incidence density in the at-risk fraction (86.4%) of the population. Age between 6 and 60 years, male sex, residence in the less-urbanized periphery, and indicators of poor housing quality were key predictors of increased malaria incidence density in the community (Table 2). It is not surprising that LLIN availability in the household, reported bed net use, and recent IRS were all positively associated with malaria incidence density, given that households perceived to be at increased malaria risk are selectively targeted for LLIN distribution and IRS.

Predictors of malaria incidence density
To further characterize high-risk study participants, we tested whether their increased malaria incidence density was due to larger proportions of subjects experiencing at least one malaria episode or to an increased number of repeated malaria episodes (which may include parasite recrudescences and relapses in addition to new infections) among those who had malaria episodes recorded during the study. We found that both factors contribute to the increased malaria incidence density observed in high-risk population strata. Indeed, 742 (42.5%) of 1,746 male study participants aged 16-40 years, but only 2,020 (30.2%) of the remaining 6,685 study participants, had at least one malaria episode during the 33-month follow-up (P < 0.0001, χ² = 94.78, 1 degree of freedom). Moreover, 1,263 (40.3%) of 3,135 study participants living in the periphery of Mâncio Lima, compared to 1,499 (28.3%) of the 5,296 individuals living in the central area of the town, experienced at least one malaria episode during the follow-up (P < 0.0001, χ² = 128.36). However, once infected, high-risk subjects were also more likely to have repeated malaria episodes during the follow-up. In fact, the frequency distributions of malaria episodes in male study participants aged 16-40 years and in those living in the periphery were significantly shifted to the right, compared to their respective counterparts (S2 Fig).
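The 2×2 comparisons quoted above can be reproduced directly from the counts in the text. The sketch below does this for the age-sex contrast with scipy; using the Pearson statistic without continuity correction recovers the reported χ² ≈ 94.8.

```python
from scipy.stats import chi2_contingency

# Rows: males aged 16-40 vs. all other participants.
# Columns: >=1 malaria episode vs. no episode (counts from the text).
table = [[742, 1746 - 742],
         [2020, 6685 - 2020]]

chi2, p, dof, _expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.2e}")
```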
Not-at-risk subjects
The not-at-risk fraction of the population described by the structural zero compartment of the RE-ZINB model may be either unexposed, naturally unsusceptible to infection, or may have acquired immunity over time. Because our explanatory variables did not directly measure natural susceptibility or acquired immunity, we focus further analyses on age, sex, and zone of residence as proxies of exposure. These variables were added to the logistic (structural zero) component of the RE-ZINB model, which was further adjusted for follow-up duration (person-years at risk). The best-fitting RE-ZINB regression model revealed a negative association of age between 16 and 40 years (but not sex) and residence in the periphery of the town with the odds of being a structural zero. Interestingly, age >60 years (a proxy of cumulative exposure and acquired immunity) remained a significant predictor of decreased malaria incidence density, but not of being a structural zero (Table 3). This indicates that age-related and spatial differences in malaria exposure, rather than acquired immunity, can account, at least in part, for the presence of not-at-risk subjects in the community. Overall, the associations between covariates and malaria incidence density identified by the NB compartment of the RE-ZINB model that also included covariates in the logit compartment (Table 3) were similar to those identified by the RE-ZINB model with an empty (i.e., no covariates added) logit compartment (Table 2).

Discussion
The long-standing consensus that malaria transmission is spatially heterogeneous provides the basis for targeting control interventions in elimination settings [38,39]. Residual malaria transmission clusters at different spatial scales, from regions to households [40-42], with specific high-risk groups termed "hot-pops" being disproportionally affected [40]. Identifying hot-pops is a top priority of malaria elimination programs. Here, we examine the drivers of small-area variation in malaria rates in the main urban hotspot in Brazil by fitting multivariable RE-ZINB regression models to community-wide surveillance data. We show that RE-ZINB models can: (i) properly fit overdispersed malaria count data and identify hot-pops, (ii) characterize spatial heterogeneity in malaria risk while controlling for potential confounders and identify hot houses, and (iii) characterize the not-at-risk fraction of the population. The results suggest that both imported and locally acquired infections contribute to the malaria burden in the study population. Each entails different malaria control interventions. We hypothesize that increased occupational exposure characterizes the main malaria hot-pop in the community, composed of adult male residents often engaged in subsistence farming in nearby settlements [43]. These subjects may serve as a source of new parasite strains continuously introduced into the town, being the main targets of interventions to reduce malaria importation. Control measures may include deploying periodic malaria screening and treatment, as well as LLINs, to the most mobile subjects in the community. Conversely, the RE-ZINB model estimates that 14% of the study participants comprise the not-at-risk fraction of the population. This relatively large fraction of the urban population is mostly composed of children and older adults living in the central urban hub who will remain uninfected regardless of any malaria control measure. Local transmission also appears to contribute to malaria risk, especially in the less-urbanized periphery. We confirm that better housing is associated with reduced malaria incidence [44,45] even in an endemic setting dominated by vectors that feed and rest predominantly outdoors [46]. Interestingly, the hot houses identified by the analysis of random-effect predictors of the RE-ZINB regression model tend to be peripherally located, but they do not form clear, easily detectable clusters.
Importantly, the fraction of study participants residing along the town boundaries (37% of the total) appears to be at increased risk after controlling for potential confounders, indicating that the association between place of residence and malaria risk is mostly spatial and is not severely confounded by age, sex, and housing quality differences among households. These findings are consistent with focal malaria transmission across the urban-rural transition in the periphery of the town [43]. Control measures to reduce local malaria transmission include, among others, IRS and LLIN distribution targeted at hot houses. Moreover, large-scale screening of windows and other house openings may represent a valuable measure to render high-risk hot houses mosquito-proof, as suggested by recent data from urban Africa [47]. The present study has some limitations. First, surveillance data were retrieved retrospectively from a case notification database and no blood samples were available for further confirmatory (e.g., molecular) diagnostic tests. We assume that nearly all malaria episodes diagnosed by microscopy and treated in study participants were retrieved [32], but routine surveillance overlooks transient sub-microscopic parasitemias that do not develop into detectable infections but remain infectious to mosquitoes [48]. Therefore, the risk factors described for microscopy-positive malaria in the community are not necessarily the same as those for sub-microscopic, often asymptomatic infections. Next, the surveillance data comprise cases diagnosed by both passive and active case detection, but our data set does not allow for distinguishing between case-finding strategies. Moreover, analyses of passively detected cases are prone to biases due to variation in access to health facilities and in health-seeking behavior, even in relatively compact urban areas where health facilities are readily accessible and provide care at no cost. Finally, the infrequency of P. falciparum malaria precludes further between-species comparisons of risk factors in the study population.

Conclusion
We conclude that both local transmission and imported cases from rural and/or forest areas are responsible for the maintenance of malaria in the urban setting of Mâncio Lima. Large sets of routinely collected surveillance data linked to additional demographic and socioeconomic information can be explored for evidence-based planning and deployment of malaria control interventions.
High resolution optical and near-IR imaging of the quadruple quasar RX J0911.4+0551

We report the detection of four images in the recently discovered lensed QSO RX J0911.4+0551. With a maximum angular separation of 3.1″, it is the quadruply imaged QSO with the widest known angular separation. Raw and deconvolved data reveal an elongated lens galaxy. The observed reddening in at least two of the four QSO images suggests differential extinction by this lensing galaxy. We show that both an ellipticity of the galaxy (ε_min = 0.075) and an external shear (γ_min = 0.15) from a nearby mass have to be included in the lensing potential in order to reproduce the complex geometry observed in RX J0911.4+0551. A possible galaxy cluster is detected about 38″ from RX J0911.4+0551 and could contribute to the X-ray emission observed by ROSAT in this field. The color of these galaxies indicates a plausible redshift in the range 0.6-0.8.

Introduction
RX J0911.4+0551, an AGN candidate selected from the ROSAT All-Sky Survey (RASS) (Bade et al. 1995, Hagen et al. 1995), has recently been classified by Bade et al. (1997; hereafter B97) as a new multiply imaged QSO. B97 show that it consists of at least three objects: two barely resolved components and a third, fainter one located 3.1″ away from the other two. They also show that the spectrum of this third, fainter component is similar to the combined spectrum of the two bright components. The lensed source is a radio quiet QSO at z = 2.8. Since RASS detections of distant radio quiet QSOs are rare, B97 pointed out that the observed X-ray flux might originate from a galaxy cluster at z ≥ 0.5 within the ROSAT error box. We present here new optical and near-IR high-resolution images of RX J0911.4+0551 obtained with the 2.56m Nordic Optical Telescope (NOT) and the ESO 3.5m New Technology Telescope (NTT). Careful deconvolution of the data allows us to clearly resolve the object into four QSO components and a lensing galaxy. In addition, a candidate galaxy cluster is detected in the vicinity of the four QSO images. We estimate its redshift from the photometric analysis of its member galaxies.

Observations and reductions
We first observed RX J0911.4+0551 in the K-band with IRAC 2b on the ESO/MPI 2.2m telescope on November 12, 1997. In spite of the poor seeing conditions (∼1.3″), a preliminary deconvolution of the data made it possible to suspect the quadruple nature of this object. Much better optical observations were obtained at the NOT (La Palma, Canary Islands, Spain). Three 300s exposures through the I filter, with a seeing of ∼0.″8, were obtained with ALFOSC under photometric conditions on November 16, 1997. Under non-photometric, but excellent seeing conditions (∼0.″5-0.″8), three 300s I-band exposures, three 300s V-band and five 600s U-band exposures were taken with HIRAC on the night of December 3, 1997. The pixel scales for HIRAC and ALFOSC are 0.″1071 and 0.″186, respectively. RX J0911.4+0551 was also the first gravitational lens to be observed with the new wide-field near-IR instrument SOFI, mounted on the ESO 3.5m NTT. Excellent K and J images were taken on December 15, 1997, and January 19, 1998, respectively. The 1024 × 1024 Rockwell detector was used with a pixel scale of 0.″144. The optical data were bias-subtracted and flat-field corrected using sky-flats. Fringe-correction was also applied to the I-band data from ALFOSC.
Sky subtraction was carried out by fitting low-order polynomial surfaces to selected areas of the frames. Cosmic ray removal was finally performed on the data. The infrared data were processed as explained in Courbin, Lidman & Magain (1998), but in a much more efficient way for SOFI than for IRAC 2b data, since the array used with the former instrument is cosmetically superior to the array used with the latter.

Deconvolution of RX J0911.4+0551
All images were deconvolved using the new deconvolution algorithm by Magain, Courbin & Sohy (1998; hereafter MCS). The sampling of the images was improved in the deconvolution process, i.e. the adopted pixel size in the deconvolved image is half the pixel size of the original frames. The final resolution adopted in each band was chosen according to the signal-to-noise (S/N) ratio of the data, the final resolution improving with the S/N. Our NOT/HIRAC data were deconvolved down to the best resolution achievable with the adopted sampling (2 pixels FWHM, i.e. 0.″1), whereas the NOT/ALFOSC frames were deconvolved to a resolution of 3 pixels FWHM, i.e. 0.″29. Although of very good quality, the near-IR data have a lower S/N than the optical data. The resolution was therefore limited to 5 pixels FWHM in both J and K (0.″36) in order to avoid noise enhancement. The MCS algorithm can be used to deconvolve simultaneously a stack of individual images or to deconvolve a single stacked image. The results from the simultaneous deconvolutions are displayed in Fig. 1. The four QSO images are labeled A1, A2, A3, and B. The quality of the results was checked from the residual maps, as explained in Courbin et al. (1998). The deconvolution procedure decomposes the images into a number of Gaussian point sources, for which the program returns the positions and intensities, plus a deconvolved numerical background. In the present case, the deconvolution was also performed with analytical De Vaucouleurs and exponential disk galaxy profiles at the position of the lensing galaxy, in order to better describe its morphology. Table 1 lists the flux of each QSO component, relative to A1, as derived from the simultaneous deconvolutions. Although the HIRAC U, V and I band data were taken during non-photometric conditions, they can still be used to determine the relative fluxes between the four images of the QSO. The errors in the relative fluxes are determined from the simultaneous deconvolutions and represent the 1-σ standard deviation in the peak intensities.

Results
When flux calibration was possible, magnitudes were calculated, and the results are displayed in Table 2, which also contains the positions of the four QSO components relative to A1. The astrometric errors are derived by comparing the positions of the components in the different bands. The deconvolved numerical background is used to determine the galaxy position from the first-order moment of the light distribution. A reasonable estimate of the error on the lens position was derived from moment measurements through apertures of varying size and on several images obtained by running deconvolutions with different initial conditions.
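The first-order moment measurement mentioned above is a flux-weighted centroid. A minimal sketch of that computation, applied to a synthetic Gaussian blob standing in for the deconvolved galaxy background, is shown below; in practice it would be applied within apertures of varying size, as described in the text.

```python
import numpy as np

def light_centroid(img):
    """Flux-weighted (first-order moment) centroid of a light distribution."""
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (cols * img).sum() / total, (rows * img).sum() / total  # (x, y)

# Synthetic background: a Gaussian blob at (x, y) = (40.3, 22.7).
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-((xx - 40.3) ** 2 + (yy - 22.7) ** 2) / (2 * 3.0 ** 2))
print(light_centroid(blob))   # ~ (40.3, 22.7)
```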
I, J, and K magnitudes for the galaxy were also estimated from the deconvolved background image by aperture photometry (∼1.2″ aperture diameter). The numerical galaxy is elongated in all three bands and the position angle of its major axis is θ_G ≃ 140° ± 5°. In the near-IR, the galaxy appears to be composed of a bright sharp nucleus plus a diffuse elongated disk. However, we cannot exclude that the observed elongation is due to an unresolved blend of two or more intervening objects. Neither of our two analytical profiles fits the galaxy perfectly. The De Vaucouleurs profile (e = 1 − b/a = 0.31 ± 0.07, θ_G = 130° ± 10°) fits slightly better than the exponential disk light distribution, but still produces residual maps with χ² values as large as 1.5-2 per pixel, compared with a χ² of 2.5-5 per pixel for the exponential disk profile. The ellipticity and position angle derived this way are very uncertain due to the low S/N of the data. Much deeper observations will be required to perform precise surface photometry of the lens(es) and to draw a definite conclusion about its (their) morphology.

Field photometry
In order to detect any intervening galaxy cluster which might be involved in the overall lensing potential and contributing to the X-ray emission observed by ROSAT, we performed I, J, and K band photometry on all the galaxies in a 2.5′ field around the lensed QSO. A composite color image was also constructed from the frames taken through these 3 filters, in order to directly visualize any group of galaxies with similar colors, and therefore likely to be at the same redshift. The color composite is presented in Fig. 2. Aperture photometry was carried out using the SExtractor package (LINUX version 1.2b5, Bertin & Arnouts, 1996). The faintest objects were selected to have at least 5 adjacent pixels above 1.2σ_sky, leading to limiting magnitudes of 23.8, 21.6, and 20.0 mag/arcsec² in the I, J, and K bands, respectively. The faintest extended object measured in the different bands had magnitudes 23.0, 22.0, and 20.3 in I, J, and K, respectively. The color-magnitude diagram of the field was constructed from the I and K band data, which give the widest wavelength range possible with our photometric data. Since the seeing was different in the two bands, particular attention was paid to the choice of the isophotal apertures fitted to the galaxies. They were chosen to be as large as possible, while still avoiding too much contamination from the sky noise, as an oversized isophotal aperture would introduce. The color-magnitude diagram of the galaxies in the 2.5′ field is displayed in Fig. 3. Stars are not included in this plot. Note that, because of their proximity, the magnitudes of the two blended members of the cluster candidate (center of the circle in Fig. 2) might be underestimated by as much as 0.3-0.4 magnitudes in both I and K.

Models
In a first attempt to model the system, we chose an elliptical potential ψ whose coordinates x′ and y′ are measured along the principal axes of the galaxy, whose position angle θ_G is a free parameter. For small ellipticities ε, this potential is a good approximation to elliptical mass distributions (Kassiola & Kovner 1993). Additionally, an external shear γ with direction φ is included. Even without assuming an explicit potential ψ, we can determine the minimal γ and ε needed to reproduce the observed image positions by applying the methods given by Witt & Mao (1997). Their methods eliminate the unknown parameters of the model to find constraints on the ellipticity of the galaxy and the external shear. For the shear we predict a minimum of γ_min = 0.15 ± 0.07, while ε_min = 0.075 ± 0.034 (both with 1σ errors). Therefore, neither ellipticity nor shear should be omitted from the modeling. To keep the models simple, we used a pseudo-isothermal elliptical potential, corresponding to an apparent deflection angle that degenerates to that of a singular isothermal sphere for vanishing core radius r_c and ellipticity ε. The model's parameters are determined by minimizing the χ² given by the observed and predicted positions of the images and the galaxy.
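To illustrate the kind of fit carried out here, the sketch below adjusts a deliberately simplified lens model, a singular isothermal sphere plus external shear rather than the pseudo-isothermal elliptical potential used in the text, to the observed image positions of Table 2. The merit function is the scatter of the back-traced source positions, a crude source-plane stand-in for the paper's image-plane χ²; the starting values are guesses.

```python
import numpy as np
from scipy.optimize import minimize

# Image and galaxy positions relative to A1 in arcsec (x to the North, y to the West).
images = np.array([[0.000, 0.000], [-0.259, 0.402], [0.013, 0.946], [2.935, 0.785]])
galaxy = np.array([0.709, 0.507])

def source_positions(params):
    """Back-trace each image through an SIS + external shear lens."""
    theta_e, gamma, phi = params
    d = images - galaxy
    r = np.hypot(d[:, 0], d[:, 1])
    alpha = theta_e * d / r[:, None]                      # SIS deflection
    c2, s2 = np.cos(2 * phi), np.sin(2 * phi)
    alpha += gamma * np.column_stack([c2 * d[:, 0] + s2 * d[:, 1],
                                      s2 * d[:, 0] - c2 * d[:, 1]])
    return images - alpha                                 # lens equation: beta = theta - alpha

def cost(params):
    beta = source_positions(params)
    return np.sum((beta - beta.mean(axis=0)) ** 2)        # all images should share one source

fit = minimize(cost, x0=[1.0, 0.15, 0.0], method="Nelder-Mead")
print("theta_E, gamma, phi =", fit.x)
```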
To keep models simple, we used a pseudo-isothermal potential corresponding to an apparent deflection angle of which degenerates to a singular isothermal sphere for vanishing core-radius r c and ellipticity ǫ. The model's parameters are determined by minimizing the χ 2 given by the observed and predicted positions of the images and the galaxy. The best model leads to χ 2 = 0.65 and has parameters as given in Table 3. This value is reasonable for a model with one degree of freedom (r c can not be counted as a free parameter here, because without the restriction of r c ≥ 0 it would become negative in the fit). The positions of the images and the galaxy in Table 3 are parameters of the model and have to be compared with the observed positions in Table 2. As shown above, no elliptical potential without a shear, and no spherical potential with shear can reproduce the observational data. For our pseudo-isothermal model, this results in large χ 2 values of 50.5 and 63.7, respectively. These values are much too high for the two degrees of freedom. Discussion Thanks to our new high-resolution imaging data, the QSO RX J0911.4+0551 is resolved into four images. In addition, deconvolution with the new MCS algorithm reveals the lensing galaxy, clearly confirming the lensed nature of this system. The image deconvolution provides precise photometry and astrometry for all the components of the system. Reddening in components A2 and A3 relative to A1 is observed from our U , V , and I frames that were taken within three hours on the same night. The absence of reddening in component B and the difference in reddening between components A2 and A3 suggest extinction by the deflecting galaxy. Note that although our near-IR data were obtained from 15 days to 6 weeks after the optical images, they appear to be consistent with the optical fluxes measured for the QSO images, i.e. flux ratios increase continuously with wavelength, from U to K, indicating extinction by the lensing galaxy. We have discovered a good galaxy cluster candidate in the SW vicinity of RX J0911.4+0551 from our field photometry in the I, J, and K bands. Comparison of our color-magnitude diagram with that of a blank field (e.g., Moustakas et al. 1997) shows that the galaxies around RX J0911.4+0551 are redder than field-galaxies at an equivalent apparent magnitude. In addition, the brightest galaxies in Fig. 3 lie on a red sequence at I − K ∼ 3.3, typical for the early type members of a distant galaxy cluster. The two dashed lines indicate our ±0.4 color error bars at K ∼ 19 around I − K ∼ 3.3. Most of these galaxies are grouped in the region around a double elliptical at a distance of ∼ 38 ′′ and a position angle of ∼ 204 • relative to A1. This can also be seen in Fig. 2 which shows a group of red galaxies with similar colors centered on the double elliptical (in the center of the circle). Consequently, there is considerable evidence for at least one galaxy cluster in the field. The redshift of our best candidate cluster (the one circled in Fig. 2) can be estimated from the I and K band photometry. We have compared the K-band magnitudes of the brightest cluster galaxies with the empirical K magnitude vs. redshift relation found by Aragón-Salamanca et al. (1998). We find that our cluster candidate, with its brightest K magnitude of about ∼ 17.0, should have a redshift of z ∼ 0.7. A similar comparison has been done in the I-band without taking into account galaxy morphology. 
For the I band, we compare the mean I magnitude of the cluster members with the magnitudes found by Koo et al. (1996) for galaxies with known redshifts in the Hubble Deep Field and obtain a cluster redshift between 0.6 and 0.9. Finally, comparison of the I − K color of the galaxy sequence with data and models from Kodama et al. (1998) confirms the redshift estimate of 0.6-0.8. In order to calculate physical quantities from the model parameters found in section 4, we assume a simple model for the cluster which may be responsible for the external shear. For an isothermal potential, the true shear and convergence are of the same order of magnitude. As the convergence is not explicitly included in the model, the deduced shear is a reduced shear, leading to an absolute convergence of κ = γ/(1 + γ) = 0.241. For a cluster redshift of z_d = 0.7 and with cosmological parameters Ω = 1, λ = 0, this corresponds to a velocity dispersion of about 1100 km s⁻¹ if the cluster is positioned at an angular distance of 40″. See Gorenstein, Falco & Shapiro (1988) for a discussion of the degeneracy preventing a direct determination of κ. From the direction of the shear φ (see Table 3) we can predict the position angle of the cluster as seen from the QSO to be 12° or 192°. The latter value agrees well with the position of our cluster candidate SW of the QSO images. Note also the good agreement between the position angle θ_G derived from the observed light distribution and the predicted position angle corresponding to our best-fitting model of the lensing potential. Interestingly, this is in good agreement with Keeton, Kochanek & Falco (1998), who find that projected mass distributions are generally aligned with the projected light distributions to less than 10°. The color of the main lensing galaxy is very similar to that of the cluster members, suggesting that it might be a member of the cluster. Using the same model for the cluster as above, assuming the galaxy is at the same redshift as the cluster, and neglecting the small ellipticity of ǫ < 0.05, the velocity dispersion of the lensing galaxy can be predicted from the calculated deflection angle α₀ to be of the order of 240 km s⁻¹. Since the galaxy profile is sharp towards the nucleus in K, we cannot rule out the possibility of a fifth central image of the source, as predicted for non-singular lens models. Near-IR spectroscopy is needed to obtain a redshift determination of the lens and to show whether or not it is blended with a fifth image of the (QSO) source. Some 10″ SW of the lens, we detect a small group of even redder objects. These red galaxies can be seen in Fig. 2 a few arcseconds to the left and to the right of the cross. They might be part of a second galaxy group at a higher redshift, with a position in better agreement with the X-ray position mentioned by B97. However, since the measured X-ray signal is near the detection limit and the 1-σ positional uncertainty is at least 20″, the X-ray emission is compatible with both the QSO and these galaxy groups in the field. Furthermore, this second group, at z > 0.7, would most likely be too faint in the X-ray domain to be detected in the RASS. In fact, even our lower-redshift cluster candidate would need an X-ray luminosity of the order of L_0.1−2.4keV ∼ 7 × 10⁴⁴ erg s⁻¹ (assuming a 6 keV thermal spectrum, H₀ = 50 km s⁻¹ Mpc⁻¹, q₀ = 0.5) in order to be detected with 0.02 cts s⁻¹ by ROSAT.
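The cluster velocity-dispersion estimate above can be reproduced with a short calculation. This is a sketch under stated assumptions: the cluster is treated as a singular isothermal sphere, for which κ(θ) = θ_E/(2θ) and θ_E = 4π(σ/c)² D_ds/D_s, and the source redshift used here (z_s = 2.8) is an assumed value that is not quoted in this section.

```python
import numpy as np
from astropy import units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

# Einstein-de Sitter cosmology, as in the text (Omega = 1, lambda = 0).
cosmo = FlatLambdaCDM(H0=50 * u.km / u.s / u.Mpc, Om0=1.0)

z_d = 0.7                 # cluster redshift estimated from the red sequence
z_s = 2.8                 # source (QSO) redshift -- assumed, not from this text
kappa = 0.241             # absolute convergence quoted in the text
theta = 40 * u.arcsec     # angular distance of the cluster from the images

# Singular isothermal sphere: kappa(theta) = theta_E / (2 theta).
theta_E = 2 * kappa * theta

# theta_E = 4 pi (sigma/c)^2 D_ds/D_s  =>  solve for sigma.
D_s = cosmo.angular_diameter_distance(z_s)
D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)
sigma = c * np.sqrt(theta_E.to(u.rad).value / (4 * np.pi) * (D_s / D_ds))
print(sigma.to(u.km / u.s))   # ~1100 km/s, consistent with the text
```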
Such a luminosity is very bright but not unrealistic for high-redshift galaxy clusters (e.g., MS 1054-03; Donahue, Gioia, Luppino et al. 1997). RX J0911.4+0551 is a new quadruply imaged QSO with an unusual image configuration. The lens configuration is complex, composed of one main lensing galaxy plus an external shear possibly caused by a galaxy cluster at a redshift between 0.6 and 0.8, and another possible group at z > 0.7. Multi-object spectroscopy is needed in order to confirm our cluster candidate(s) and derive its (their) redshift and velocity dispersion. In addition, weak lensing analysis of background galaxies might prove useful to map the overall lensing potential involved in this complex system.

Note. - Results obtained from our non-photometric data. All measurements are given along with their 1-σ errors.

References:
Koo, D.C., Vogt, N.P., Phillips, A.C., et al. 1996, ApJ, 469, 535
Magain, P., Courbin, F., & Sohy, S. 1998, ApJ, 494, 452
Moustakas, L.A., Davis, M., Graham, J.R., et al. 1997, ApJ, 475, 445
Witt, H.J., & Mao, S. 1997, MNRAS, 291, 211

Fig. 1.-For all three bands the object is clearly resolved into four QSO images, labeled A1, A2, A3, and B, plus the elongated lensing galaxy. The fields of the optical and near-IR data are respectively 7″ and 9″ on a side. North is to the top and East to the left in all frames.

Fig. 2.-Composite image of a 2′ field around RX J0911.4+0551. The frame has been obtained by combining our I, J and K-band data. North is up and East to the left. Note the group of red galaxies with similar colors, about 38″ SW of the quadruple lens (circle), and the group of even redder galaxies 10″ SW of the lens (cross).

Fig. 3.-Color-magnitude diagram of the 2.5′ field around RX J0911.4+0551. The lensing galaxy and many of the cluster members form the overdensity around I − K ∼ 3.3. The lensing galaxy in RX J0911.4+0551 is marked by a star and the two blended galaxies in the center of the cluster candidate are plotted as triangles. Stars are not plotted in this diagram.

Astrometry of the system (relative to A1):
x(″):  A1 0.000 ± 0.004 | A2 −0.259 ± 0.007 | A3 +0.013 ± 0.008 | B +2.935 ± 0.002 | G +0.709 ± 0.026
y(″):  A1 0.000 ± 0.008 | A2 +0.402 ± 0.006 | A3 +0.946 ± 0.008 | B +0.785 ± 0.003 | G +0.507 ± 0.046

Note. - The astrometry is given relative to component A1, with x and y coordinates defined positive to the North and West of A1. All measurements are given along with their 1-σ errors. The 1-σ errors on the photometric zero points are 0.02 in all bands.
Impact of early quantitative morbidity on 1-year outcomes in coronary artery bypass graft surgery

Abstract OBJECTIVES: We applied the Clavien-Dindo Complications Classification (CDCC) and the Comprehensive Complication Index (CCI) to the CORONARY trial to assess whether quantitative early morbidity affects outcomes at 1 year. METHODS: All postoperative hospitalization and 30-day follow-up complications were assigned a CDCC grade. CCIs were calculated for all patients (n = 4752). Kaplan-Meier analysis examined 1-year mortality and the 1-year co-primary outcome (i.e. death, non-fatal stroke, non-fatal myocardial infarction, new-onset renal failure requiring dialysis or repeat coronary revascularization) by CDCC grade. Multivariable logistic regression evaluated the predictive value of the CCI for both outcomes. RESULTS: For off-pump and on-pump coronary artery bypass graft surgery, median CDCC grades were 1 [interquartile range: 0, 2] and 2 [1, 2] (P < 0.001), while median CCIs were 8.7 [0, 22.6] and 20.9 [8.7, 29.6], respectively (P < 0.001). In on-pump surgery, there were more grade I and grade II complications, particularly grade I and II transfusions (P < 0.001) and grade I acute kidney injury (P = 0.039), and more grade IVa respiratory failures (P = 0.047). Patients with ≥IIIa complications had greater cumulative 1-year mortality (P < 0.001). The median CCI was 8.7 [0, 22.6] in patients who survived and 22.6 [8.7, 44.3] in patients who died at 1 year (P < 0.001). The CCI remained an independent risk factor for 1-year mortality and the 1-year co-primary outcome after multivariable adjustment (P < 0.001). CONCLUSIONS: On-pump coronary artery bypass graft surgery had a greater number of complications in the early postoperative period, likely driven by transfusions, respiratory outcomes and acute kidney injury. This affects 1-year outcomes. Similar analyses have not yet been used to compare the two techniques and could prove useful to quantify procedural morbidity. Clinical trial registration: https://www.clinicaltrials.gov/ct2/show/NCT00463294; Unique Identifier: NCT00463294.

INTRODUCTION

Coronary artery bypass graft surgery (CABG) has been used for the past 50 years to improve survival in patients with coronary artery disease [1]. Over the years in the USA, procedural in-hospital mortality decreased from 5.5% to 3.1% between 1989 and 2004 [2], with a further decrease in operative mortality down to 2.3% in 2017 for isolated CABG [3]. Though lowering mortality must remain the primary objective, other outcomes can be examined as endpoints to better appreciate differences in clinical results. There is an ongoing debate on the relative risks and benefits of on-pump versus off-pump CABG. On-pump CABG relies on the use of cardiopulmonary bypass to ensure perfusion while the heart is stopped. Off-pump CABG aims to reduce the morbidity associated with cardiopulmonary bypass by performing the graft anastomoses on a beating, non-supported heart. Each technique has its proponents, and both have been compared in large randomized controlled trials, including the CABG Off- or On-Pump Revascularization Study (CORONARY) trial [4-7], the Randomized On/Off Bypass trial [8-10], the German Off-Pump Coronary Artery Bypass Grafting in Elderly Patients trial [11,12] and the Danish On-pump versus Off-pump Randomization Study trial [13]. Despite more than 10 000 combined randomized patients, studies of short- and long-term outcomes have yielded debatable results between the 2 techniques.
This is in part due to variable technique between trials and variable experience with the techniques at the surgical and organizational level. The application of classification scales to quantify procedural morbidity with a simple metric may therefore be of interest. The Clavien-Dindo Complications Classification (CDCC) [14] grades the severity of the worst complication based on the invasiveness of its treatment, and the Comprehensive Complication Index (CCI) [15] adds the weighted CDCC grades to reflect the overall morbidity burden in individual patients. The CDCC has been used for the past 20 years in numerous other surgical specialties and is now considered a gold standard of outcome reporting [16,17]. It was recently adapted for and validated in cardiac surgery using a comprehensive clinical cardiac surgery registry [18] but has never been applied to a large clinical trial. To better characterize the early morbidity in off-pump CABG and on-pump CABG, the CDCC and the CCI were applied to the CORONARY trial cohort. The impact of early morbidity on long-term survival was also assessed.

Trial design

The CORONARY trial was a randomized controlled trial with blinded adjudication of outcomes comparing isolated off-pump and on-pump CABG. The primary hypothesis was that off-pump CABG would be associated with fewer early major clinical events (30 days) than on-pump CABG and that the benefits of off-pump CABG would be maintained long term, at 5 years. We have previously published the trial design and the results at 30 days [5], 1 year [6] and 5 years [7].

Ethical statement

This trial was approved by the ethics committee at each participating centre and was funded by the Canadian Institutes of Health Research. Patients provided written informed consent and the study was conducted in accordance with the ethical standards of the Helsinki Declaration. The authors vouch for the accuracy and completeness of the data and take responsibility for its integrity and the data analysis.

Study patients and follow-up

As previously described, patients who were scheduled to undergo CABG were eligible to participate in the trial if they required isolated CABG with median sternotomy, provided written informed consent and had one or more of the following risk factors: an age of 70 years or more, peripheral arterial disease, cerebrovascular disease or carotid stenosis of 70% or more of the luminal diameter, or renal insufficiency. Patients 60-69 years of age were eligible if they had at least one of the following risk factors (and patients 55-59 years of age were eligible if they had at least 2): diabetes requiring treatment with an oral hypoglycaemic agent or insulin, the need for urgent revascularization after an acute coronary syndrome, a left ventricular ejection fraction of <35%, or a history of smoking within 1 year before randomization. Study personnel conducted in-person or telephone follow-up with patients or their next of kin (if patients were not available) at 30 days and at 1 year after the procedure, and on a yearly basis until the end of the trial. If a patient indicated that any outcome event had occurred, the patient's physician was contacted to obtain source documents regarding the event.

Assessment of postoperative complications

The early morbidity assessed in this study encompassed complications collected as part of the CORONARY trial during the immediate postoperative hospitalization and at 30-day follow-up.
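Incidentally, the age-tiered eligibility rules quoted above can be made concrete with a short sketch. This is an illustrative helper only, not part of the trial's software; the encoding of risk factors as counts and all names are ours.

```python
def eligible(age, primary_risks, secondary_risks):
    """Sketch of the CORONARY eligibility rules described above.

    primary_risks: count among {peripheral arterial disease,
      cerebrovascular disease or carotid stenosis >= 70%, renal
      insufficiency}; an age of 70 or more also qualifies on its own.
    secondary_risks: count among {treated diabetes, urgent
      revascularization after an acute coronary syndrome, LVEF < 35%,
      smoking within 1 year before randomization}.
    """
    if age >= 70 or primary_risks >= 1:
        return True
    if 60 <= age <= 69:
        return secondary_risks >= 1
    if 55 <= age <= 59:
        return secondary_risks >= 2
    return False

assert eligible(72, 0, 0)        # age alone qualifies
assert eligible(63, 0, 1)        # 60-69 with one secondary risk factor
assert not eligible(57, 0, 1)    # 55-59 needs at least two
```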
The complications collected in the trial were graded according to the CDCC, adapted and validated for cardiac surgery [18] and presented in Table 1, while the complications recorded in the trial are listed in Table 2. The CDCC is a method of grading complication severity based on the treatment invasiveness required to correct the complication. Early morbidity complications were assigned a CDCC grade according to the usual treatment of the complication. When assigning the overall CDCC grade to a single patient, the most severe complication grade seen in that patient is used. The CCI was also used to quantify procedural morbidity in each patient [15]. It combines the complications occurring in a single patient, adding their weights to produce a score on a scale of 0 to 100 (a worked sketch of this scoring follows the statistical-analysis description below). The maximum score of 100 is reserved for the death of a patient. The composite outcome used as the CORONARY trial's co-primary outcome (i.e. death, non-fatal stroke, non-fatal myocardial infarction, new renal failure requiring dialysis or repeat coronary revascularization either by percutaneous coronary intervention or redo CABG) was also used in this analysis, at 1 year post-randomization instead of 5 years, to specifically examine the impact of early morbidity at 1 year. One-year outcomes were chosen given our hypothesis that they would more likely be affected by morbidity in the first 30 days than 5-year outcomes would.

Statistical analysis

All analyses were conducted as intention-to-treat. Continuous non-normally distributed variables are expressed as medians with first and third quartiles, while categorical variables are presented as absolute numbers and percentages (%). The CDCC grades were considered ordinal variables, while the CCI was considered a continuous non-normally distributed variable. Chi-squared, Mann-Whitney U and Kruskal-Wallis tests were used to compare differences in CDCC grade and CCI relative to other variables. For survival, Kaplan-Meier analyses were used to plot the 1-year mortality based on early CDCC grades of patients who had survived the early period (n = 4608). For these analyses, patients who had a grade V complication (death) during the initial postoperative hospitalization or within 30-day follow-up were removed. Additional survival curves were made to plot 1-year mortality by treatment group, as well as to plot the 1-year co-primary outcome in the overall cohort and by treatment group. When plotting the Kaplan-Meier curves for the co-primary outcome, patients who had the co-primary outcome within 30 days were excluded, since this was also taken into consideration by the CDCC grade. Log-rank tests were used to compare survival curves. Receiver operating characteristic curves were plotted using the predicted probabilities generated through multivariable logistic regression in the entire cohort for 1-year mortality or the 1-year co-primary outcome. Age and sex were forced into the multivariable models due to their strong association with these outcomes. Other factors associated with the two 1-year outcomes were identified through logistic regression models using least absolute shrinkage and selection operator (LASSO) selection methods to identify candidate variables. To account for the effect of individual centres, multivariable analyses using mixed-effects regression models with a logit link and a random effect of centre were then conducted using the variables identified through LASSO selection.
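The sketch below illustrates the CDCC-to-CCI aggregation referred to above. The grade weights and the square-root formula are taken from the published CCI definition (Slankamenac et al. 2013), under which a single grade I complication scores 8.7 and a single grade II scores 20.9, matching the medians quoted in this paper; the function names and category encoding are illustrative.

```python
import math

# Published CCI weights per Clavien-Dindo grade (Slankamenac et al. 2013):
# a single complication of each grade maps to CCI = sqrt(w) / 2.
WEIGHTS = {
    "I": 300, "II": 1750, "IIIa": 2750, "IIIb": 4550,
    "IVa": 7200, "IVb": 8550, "V": 40000,   # grade V (death) -> CCI = 100
}

def cci(grades):
    """Comprehensive Complication Index for one patient, 0-100 scale."""
    if "V" in grades:
        return 100.0
    return round(math.sqrt(sum(WEIGHTS[g] for g in grades)) / 2, 1)

def cci_category(score):
    """The 7 CCI categories used in this paper's multivariable models."""
    label = "0"
    for low, name in [(1, "1-19"), (20, "20-29"), (30, "30-39"),
                      (40, "40-49"), (50, "50-69"), (70, "70-100")]:
        if score >= low:
            label = name
    return label

print(cci(["I"]))          # 8.7
print(cci(["II"]))         # 20.9
print(cci(["I", "II"]))    # 22.6, as in the quoted interquartile ranges
```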
In these multivariable models, variables were manually excluded in a backward selection process until all variables in the final model were significant. A second mixed-effects multivariable model was created in which the CCI was added to all the variables of the first multivariable model, regardless of the changes in their regression coefficients or P-values. Since the CCI was not normally distributed, the variable was categorized in 7 arbitrarily defined categories [CCI = 0 (n = 1299); CCI = 1-19 (n = 1401); CCI = 20-29 (n = 1109); CCI = 30-39 (n = 207); CCI = 40-49 (n = 380); CCI = 50-69 (n = 176); CCI = 70-100 (n = 146)]. The CCI in these cases excluded patients who had died within 30 days, so as to reduce incorporation bias. Likewise, patients who had died within 30 days or had reached the co-primary outcome at 30 days were excluded. Areas under the receiver operating characteristic curve with 95% confidence intervals were calculated for each model and compared. Assumptions for logistic regression were verified for all models, and collinearity was excluded by assessing the correlation between variables, which was shown to be minimal. Statistical analyses were conducted with SAS (version 9.4; SAS Institute, Inc., Cary, NC, USA) and statistical significance was set at α = 0.05.

Patients

The baseline characteristics of the CORONARY trial cohort have previously been described in detail [5]. From November 2006 through October 2011, a total of 4752 patients were enrolled from 49 hospitals in 19 countries and randomized in a 1:1 ratio to undergo off-pump (n = 2375) or on-pump CABG (n = 2377). At the end of the trial, mean follow-up was 4.8 years after randomization and data were available for 98.8% of patients.

Impact of early morbidity on survival and co-primary outcome

Survival curves differed significantly based on CDCC grade (Fig. 2A, P < 0.001). Between the 2 techniques, the impact of grade III and IV complications on long-term mortality was similar (Fig. 3A for off-pump CABG and Fig. 3B for on-pump CABG). The quantitative early morbidity was lower in patients who survived at 1 year (n = 4514), with a median [interquartile range] CCI of 8.7 [0, 22.6], compared to 22.6 [8.7, 44.3] in patients who died at 1 year (P < 0.001). In patients who did not reach the co-primary outcome within 30 days, there was a significant increase in future occurrence of the co-primary outcome based on CDCC grade (P < 0.001). The co-primary outcome was reached in 2.5% (30/1201) of patients with no complications, 2.3% (29/1278) with grade I, 4.0% (50/1256) with grade II and 5.8% (30/517) with grade III/IV. The median [interquartile range] CCI was 8.7 [0, 22.6] in patients who did not reach the co-primary outcome between 30 days and 1 year, compared to 20.9 [8.7, 32.0] in patients who did (P < 0.001). The risk of the co-primary outcome was significantly higher with greater complication grades according to the Kaplan-Meier curves for the entire cohort (Fig. 2B), patients undergoing off-pump CABG (Fig. 4A) and patients undergoing on-pump CABG (Fig. 4B). Using multivariable logistic regression, the CCI remained an independent risk factor for adverse outcomes at 1 year for both mortality and the co-primary outcome (Table 3). It also increased the predictive value of the multivariable model based on the receiver operating characteristic curves for 1-year mortality (Fig. 5A, P < 0.001), but not for the 1-year co-primary outcome (Fig. 5B, P = 0.09).

DISCUSSION

We have applied a quantitative morbidity analysis to the CORONARY trial by using the CDCC and the CCI to assess early morbidity.
We found that there were fewer patients without any complications in the on-pump CABG group, and that early morbidity was greater in patients undergoing on-pump CABG than off-pump CABG.

(Table 3 note: The models excluded patients who had reached the outcome within 30 days, and the CCI excluded the scores of patients who had died within 30 days. CCI: Comprehensive Complication Index; CI: confidence interval; eGFR: estimated glomerular filtration rate in ml/min/1.73 m²; LVEF: left ventricular ejection fraction; OR: odds ratio.)

This early morbidity also translates into worse long-term outcomes. This deviates from the previous reports of the trial, which did not find significant differences between the 2 treatment groups at 30 days [5] and 1 year [6] for the first co-primary composite outcome of death, non-fatal myocardial infarction, non-fatal stroke or non-fatal new renal failure requiring dialysis. Other secondary outcomes such as cost per procedure and quality-of-life measures were not significantly different [7]. The likely cause of this new finding is that using a quantitative approach to morbidity increases the sensitivity of reported outcomes. The CCI provides a weighted sum of complications in every patient and therefore gives a better understanding of the total postoperative morbidity. Additionally, some early outcomes had already been reported less frequently in off-pump CABG, including rates of bleeding, acute kidney injury and respiratory complications [5]. When all early complications are combined, these differences between procedures become more apparent. This exploratory post hoc analysis is not sufficient on its own to support prioritizing off-pump CABG over on-pump CABG, as the major outcomes that have been extensively researched remain more clinically important and similar between both groups. Similarly, other considerations of surgical technique, centre expertise and patient profile outweigh the significant differences in low-grade complications between the treatment groups. The severity of the early morbidity also affected the 1-year survival. We expected to find an increasing cumulative mortality rate with greater complication grades. The survival curves for both off-pump and on-pump CABG were consistent with this hypothesis. Complications of higher order were associated with decreased survival at 1 year compared to lower-order complications in both treatment groups and the entire cohort, with a greater than three-fold increase in 1-year mortality or co-primary outcome with grade III reinterventions or grade IV intensive care unit admissions for organ failure. This increase in risk is consistent with other studies which have applied the CDCC/CCI, but it may be even more marked in cardiac surgery. This could reflect the greater burden on long-term organ function in patients who require this kind of postoperative management. For instance, patients with a high postoperative CCI (≥26.2) following gastric cancer resection had a cancer-specific survival of 46.3% at 5 years, compared to 54.9% in patients with less postoperative morbidity (P = 0.009) [19]. In a cohort of patients following colorectal cancer surgery, the CCI was associated with an increase in mortality at 5 years, with a hazard ratio of 1.22 (P = 0.02) [20]. The application of the CDCC/CCI to CORONARY demonstrates the potential of quantitative morbidity as an outcome measure in clinical trials. It can measure the number and severity of postoperative complications, thereby giving a more accurate picture of procedural morbidity.
It adds granularity by allowing different gradings for the same complication, depending on the invasiveness of treatment. For instance, a wound infection requiring antibiotics is less severe than a wound infection requiring extensive debridement in the operating room, which is in turn less severe than a wound infection causing septic shock. Finally, it gives a more holistic approach to defining clinical outcome benchmarks for quality-of-care improvement initiatives.

Limitations

The main limitation is that the CORONARY trial was not originally designed using the CDCC/CCI for morbidity. Therefore, the sample size calculation was done using the co-primary outcomes as part of the original CORONARY trial design. Applying the CDCC/CCI as a post hoc analysis can therefore only be speculative and exploratory in its conclusions. Ideally, a system of complication grading would be implemented at the start of the trial to gather more granularity regarding the severity of each complication. As such, other complications that were not originally included in the study protocol (e.g. gastroenterological complications) could not be included in this analysis. In addition, the blinded adjudication of outcomes used in this trial was only applied to the components of the primary outcome and recurrent angina. This allows for different outcome reporting between centres in the study based on the local interpretation of study definitions, though this is unlikely to cause a differential bias between procedures given the treatment randomization, which would have distributed patients evenly among centres. An inherent component of the CDCC/CCI is the use of invasiveness of treatment to assess the severity of a complication. As such, this introduces a medical decision bias that depends on the management method of certain complications between centres and between surgeons (e.g. dialysis in acute kidney injury, transfusions and repeat revascularizations). These outcomes were, however, used in the original trial and were therefore considered relevant to include in this analysis. Given that the randomization in the CORONARY trial was done within each centre, this should also not produce a significant differential bias between the treatment groups, but it must be considered in the interpretation of the results. The observation interval for the early complications was also arbitrarily defined as in-hospital and 30-day morbidity, though additional complications impacting long-term mortality may have occurred after this period. The method of follow-up based on interviews with patients or their next of kin at 30 days and 1 year can also introduce incomplete reporting of morbidity, for which we could not account.

In conclusion, on-pump CABG seems to be associated with a greater number of complications in the early postoperative period when the CDCC and the CCI are used to measure the number and severity of complications. This is likely driven by increased transfusions, respiratory outcomes and acute kidney injury. This early morbidity within 30 days also seems to affect 1-year outcomes. This quantitative morbidity approach has never before been applied to the debate between on-pump CABG and off-pump CABG, and it suggests a usefulness for the CDCC/CCI to better quantify procedural morbidity in clinical trials. The implementation of these systems in prospectively collected databases would allow further confirmation of the present findings.
Planck Early Results: All-sky temperature and dust optical depth from Planck and IRAS: constraints on the "dark gas" in our Galaxy

We construct an all-sky map of the apparent temperature and optical depth of thermal dust emission using the Planck-HFI and IRAS data. The optical depth maps are correlated with tracers of the atomic and molecular gas. The correlation is linear in the lowest column density regions at high Galactic latitudes. At high N_H, the correlation is consistent with that of the lowest N_H. In the intermediate N_H range, we observe a departure from linearity, with the dust optical depth in excess of the correlation. We attribute this excess emission to thermal emission by dust associated with a dark-gas phase, undetected in the available H i and CO measurements. We show the 2D spatial distribution of the dark gas in the solar neighborhood and show that it extends around known molecular regions traced by CO. The average dust emissivity in the H i phase in the solar neighborhood follows roughly a power-law distribution with β = 1.8 all the way down to 3 mm, although the SED flattens slightly in the millimetre. The threshold for the existence of the dark gas is found at N_H = (8.0 ± 0.58) × 10²⁰ H cm⁻². Assuming the same dust emissivity at high frequencies for the dust in the atomic and molecular phases leads to an average X_CO = (2.54 ± 0.13) × 10²⁰ H₂ cm⁻² (K km s⁻¹)⁻¹. The mass of dark gas is found to be 28% of the atomic gas and 118% of the CO-emitting gas in the solar neighborhood. A possible origin for the dark gas is the existence of a dark molecular phase, where H₂ survives photodissociation but CO does not. The observed transition for the onset of this phase in the solar neighborhood (A_V = 0.4 mag) appears consistent with recent theoretical predictions. We also discuss the possibility that up to half of the dark gas could be in atomic form, due to optical depth effects in the H i measurements.

Introduction

The matter that forms stars, that is left over after star formation, or that has never experienced star formation comprises the interstellar medium (ISM). The distribution of diffuse interstellar gas, by which we mean gas not in gravitationally-bound structures and not in the immediate vicinity of active star-formation regions, has primarily been assessed using the 21-cm hyperfine line of atomic hydrogen. That line is easily excited by collisions and is optically thin for gas with temperature T_K > 50 K and velocity dispersion δV > 10 km s⁻¹, as long as the column density is less than 9 × 10²¹ cm⁻² (Kulkarni & Heiles 1988). Such conditions are typical of the diffuse ISM pervaded by the interstellar radiation field (ISRF), because photoelectric heating from grain surfaces keeps the gas warm (T > 50 K), and observed velocity dispersions (presumably due to turbulence) are typically > 10 km s⁻¹. Based on the observed dust extinction per unit column density, N(HI)/A_V = 1.9 × 10²¹ cm⁻² mag⁻¹ (Bohlin et al. 1978), the upper column density for optically thin 21-cm lines corresponds to visible extinctions A_V < 4.7. Thus the 21-cm line is expected to trace diffuse, warm atomic gas accurately throughout the diffuse ISM, except for lines of sight that are visibly opaque or are particularly cold. Molecular gas is typically traced by the 2.6-mm ¹²CO(J=1→0) rotational line in emission, which, like the 21-cm H i line, is easily excited because it involves energy levels that can be populated by collisions.
The CO emission line, however, is commonly optically thick, owing to its high radiative transition rate. In the limit where the lines are optically thick, the primary information determining the amount of molecular gas in the beam is the line width. If the material is gravitationally bound, then the virial mass is measured and CO can be used as a tracer of molecular mass. It is common astronomical practice to consider the velocity-integrated CO line intensity as measuring the molecular column density, with the implicit assumption that the material is virialized and the mass of the virialized structures is being measured. In the diffuse ISM, these conditions typically do not apply. On a physical scale of R (measured in parsecs), interstellar material is only virialized if its column density N > 5.2 × 10²¹ δV² R⁻¹ cm⁻², where δV is the velocity dispersion (measured in km s⁻¹). Thus the diffuse ISM is typically gravitationally unbound, invalidating the usage of CO as a virial tracer of the molecular gas mass, except in very compact regions or in regions that are visibly opaque. Although CO can emit in gas of low density, the critical density required for collisional equilibrium is of order 10³ cm⁻³, which further complicates the usage of CO as a tracer. This again is not typical of the diffuse ISM. To measure the amount and distribution of the molecular ISM, as well as the cold atomic ISM, other tracers of the interstellar gas are required. At least three tracers have been used in the past: UV absorption in the Werner bands of H₂, infrared emission from dust, and γ-ray emission from pion production due to cosmic rays colliding with interstellar nucleons. The UV absorption is exceptionally sensitive to even very low H₂ column densities of 10¹⁷ cm⁻². Using Copernicus (Savage et al. 1977) and FUSE data, atomic and molecular gas could be measured simultaneously on the sightlines to UV-bright stars and some galaxies. A survey at high Galactic latitudes with FUSE showed that the molecular fraction of the ISM, f(H₂) ≡ 2N(H₂)/[2N(H₂) + N(HI)], is < 10⁻³ for lines of sight with total column density less than 10²⁰ cm⁻², but there is a tremendous dispersion, from 10⁻⁴ to 10⁻¹, for higher-column-density lines of sight (Wakker 2006). Since UV-bright sources are preferentially found towards the lowest-extinction sightlines, an accurate average f(H₂) is extremely difficult to determine from the stellar absorption measurements. Along lines of sight toward AGNs behind diffuse interstellar clouds, Gillmon & Shull (2006) found molecular hydrogen fractions of 1-30%, indicating significant molecular content even for low-density clouds. The dust column density has been used as a total gas column density tracer, with the assumption that gas and dust are well mixed. The possibility that dust traces the column density better than H i and CO was recognized soon after the first all-sky infrared survey by IRAS, which for the first time revealed the distribution of dust on angular scales down to 5′. Molecular gas without CO was inferred from comparing IRAS 100 µm surface brightness to surveys of the 21-cm and 2.6-mm lines of H i and CO on 9′ or degree scales (de Vries et al. 1987; Heiles et al. 1988; Blitz et al. 1990). At 3′ scale using Arecibo, the cloud G236+39 was found to have significant infrared emission unaccounted for by the 21-cm or 2.6-mm lines, with a large portion of the cloud being possibly H₂ with CO emission below the detection threshold (Reach et al. 1994).
Meyerdierks & Heithausen (1996) also detected IR emission surrounding the Polaris flare in excess of what was expected from the H i and CO emission, which they attributed to diffuse molecular gas. The all-sky far-infrared observations by COBE-DIRBE (Hauser et al. 1998) made it possible to survey the molecular gas not traced by H i or CO at the 1° scale (Reach et al. 1998). This revealed numerous "infrared excess" clouds, many of which were confirmed as molecular after detection of faint CO with NANTEN (Onishi et al. 2001). Finally, there are also indications of more dust emission than expected in nearby external galaxies such as the Large Magellanic Cloud (Bernard et al. 2008; Roman-Duval et al. 2010) and the Small Magellanic Cloud (Leroy et al. 2007). This suggests that large fractions of the gas masses of these galaxies are not detected using standard gas tracers. The γ-rays from the interstellar medium provide an independent tracer of the total nucleon density. As was the case with the dust column density, the γ-ray-inferred nucleon column density appears to show an extra component of the ISM not associated with detected 21-cm or 2.6-mm emission; this extra emission was referred to as "dark gas" (e.g. Grenier et al. 2005; Abdo et al. 2010), a term we will adopt in this paper to describe interstellar material beyond what is traced by H i and CO emission. Grenier et al. (2005) inferred dark gas column densities of order 50% of the total column density toward locations with little or no CO emission above the detection threshold, and found general consistency between infrared and γ-ray methods of detection. Recent observations using FERMI have significantly advanced this method, allowing γ-ray emission to be traced even in the high-latitude diffuse ISM. In the Cepheus, Cassiopeia, and Polaris Flare clouds, the correlated excess of dust and γ-rays yields dark gas masses that range from 40% to 60% of the CO-bright molecular mass (Abdo et al. 2010). Theoretical work predicts a dark molecular gas layer in regions where the balance between photodissociation and molecular formation allows H₂ to form in significant quantity while the gas-phase C remains atomic or ionized (Wolfire et al. 2010; Glover et al. 2010). In this paper we describe new observations made with Planck (Planck Collaboration 2011a) that trace the distribution of submillimeter emission at 350 µm and longer wavelengths. (Planck, http://www.esa.int/Planck, is a project of the European Space Agency, ESA, with instruments provided by two scientific consortia funded by ESA member states, in particular the lead countries France and Italy, with contributions from NASA (USA), and telescope reflectors provided in a collaboration between ESA and a scientific consortium led and funded by Denmark.) In combination with observations up to 100 µm wavelength by IRAS and COBE-DIRBE, we are uniquely able
Planck data The Planck first mission results are presented in Planck Collaboration (2011a) and the in-flight performances of the two focal plane instruments HFI (High Frequency Instrument) and LFI (Low Frequency Instrument) are given in Planck HFI Core Team (2011a) and Mennella et al. (2011) respectively. The data processing and calibration of the HFI and LFI data used here is described in Planck HFI Core Team (2011b) and Planck Collaboration (2011b) respectively. Here we use only the HFI (DR2 release) data, the processing and calibration of which are described in Planck HFI Core Team (2011b). In this data the CMB component was identified and subtracted through a Needlet Internal Linear Combination (NILC) (Planck HFI Core Team 2011b). We use the internal variance on intensity (σ 2 II ) estimated during the Planck data processing and provided with the Planck-HFI data, which we assume represents the white noise on the intensity. Note that this variance is inhomogeneous over the sky, owing to the Planck scanning strategy (Planck Collaboration 2011a), with lower values in the Planck deep fields near the ecliptic poles. We have checked that, within a small factor (< 2), the data variance above is consistent with "Jack-Knife" maps obtained from differencing the two halves of the Planck rings. We also use the absolute uncertainties due to calibration uncertainties given in Planck HFI Core Team (2011b) for HFI and summarized in Table 1. We note that, for a large scale analysis such as carried out here, variances contribute to a small fraction of the final uncertainty resulting from combining data over large sky regions, so that most of the final uncertainty is due to absolute uncertainties. HI data In order to trace the atomic medium, we use the LAB (Leiden/Argentine/Bonn) survey which contains the final data release of observations of the H i 21-cm emission line over the entire sky (Kalberla et al. 2005). This survey merged the Leiden/Dwingeloo Survey (Hartmann & Burton 1997) of the sky north at δ > −30 • with the IAR (Instituto Argentino de Radioastronomia) Survey (Arnal et al. 2000;Bajaja et al. 2005) of the Southern sky at δ < −25 • . The angular resolution and the velocity resolution of the survey are ∼ 0.6 • and ∼ 1.3 km s −1 . The LSR velocity range −450 < V LSR < 400 km s −1 is fully covered by the survey with 891 channels with a velocity separation of ∆V ch = 1.03 km s −1 . The data were corrected for stray radiation at the Institute for Radioastronomy of the University of Bonn. The rms brightnesstemperature noise of the merged database is slightly lower in the southern sky than in the northern sky, ranging over 0.07-0.09 K. Residual uncertainties in the profile wings, due to defects in the correction for stray radiation, are for most of the data below a level of 20 to 40 mK. We integrated the LAB data in the velocity range −400 < V LSR < 400 km s −1 to produce an all sky map of the H i integrated intensity (W HI ), which was finally projected into the HEALPix pixelisation scheme using the method described in Sect. 2.3.1. We estimate the noise level of the W HI map as ∆T rms ∆V ch √ N ch where N ch (= 777) is the number of channels used for the integration, and ∆T rms is the rms noise of the individual spectra measured in the emission-free velocity range mainly in −400 < V LSR < 350 km s −1 . 
The resulting noise of the W_HI map is mostly less than ∼2.5 K km s⁻¹ over the sky, with an average value of ∼1.7 K km s⁻¹, except for some limited positions showing somewhat larger noise (∼10 K km s⁻¹).

CO data

In order to trace the spatial distribution of the CO emission, we use a combination of 3 large-scale surveys in the ¹²CO(J=1→0) line. In the Galactic plane, we use the Dame et al. (2001) survey obtained with the CfA telescope in the north and the CfA-Chile telescope in the south, referred to here as DHT (Dame, Hartmann & Thaddeus). These data have angular resolutions of 8.4′ ± 0.2′ and 8.8′ ± 0.2′, respectively. The velocity coverage and the velocity resolution of these data vary from region to region on the sky, depending on the individual observations composing the survey. The most heavily used spectrometer is the 500 kHz filter bank, providing a velocity coverage and resolution of 332 km s⁻¹ and 1.3 km s⁻¹, respectively. Another 250 kHz filter bank, providing 166 km s⁻¹ coverage and 0.65 km s⁻¹ resolution, was also frequently used. The rms noises of these data are suppressed down to 0.1-0.3 K (for details, see their Table 1). The data cubes have been transformed into the velocity-integrated intensity of the line (W_CO) by integrating over the velocity range where the CO emission is significantly detected, using the moment method proposed by Dame (2011). The noise level of the resulting W_CO map varies accordingly over the survey. We also use the unpublished high-latitude survey obtained using the CfA telescope (Dame et al. 2010, private communication). This survey is still on-going and covers the northern sky up to latitudes as high as |b_II| = 70°, which greatly increases the overall sky coverage. The noise level of the CO spectra is suppressed to ∼0.18 K for the 0.65 km s⁻¹ velocity resolution, and the total CO intensity was derived by integrating typically 10-20 velocity channels, which results in a noise level of 0.4-0.6 K km s⁻¹. Finally, we combined the above surveys with the NANTEN ¹²CO(J=1→0) survey obtained from Chile. This survey complements some of the intermediate Galactic latitudes not covered by the Dame et al. (2001) maps, with an angular resolution of 2.6′. Most of the survey along the Galactic plane has a velocity coverage of ∼650 km s⁻¹ with a wide-band spectrometer, but a part of the survey has a coverage of ∼100 km s⁻¹ with a narrow-band spectrometer. The noise level achieved was 0.4-0.5 K at a velocity resolution of 0.65 km s⁻¹. The CO spectra were sampled on a 2′ grid in the Galactic centre, and on 4′ and 8′ grids along the Galactic plane in the latitude ranges |b| < 5° and |b| > 5°, respectively. The integrated intensity maps were obtained by integrating over the whole velocity range, excluding regions of the spectra where no emission is observed. The resulting rms noise in the velocity-integrated intensity map varies depending on the width of the emission. This survey along the Galactic plane is still not published in full, but parts of the survey have been analyzed (e.g. Fukui et al. 1999; Matsunaga et al. 2001; Mizuno & Fukui 2004). A large amount of the sky at intermediate Galactic latitude toward the nearby clouds is also covered at a higher velocity resolution of ∼0.1 km s⁻¹ with a narrow-band spectrometer with a 100 km s⁻¹ band (e.g. Onishi et al. 1999; Kawamura et al. 1999; Mizuno et al. 2001).
The velocity coverage, the grid spacing, and the noise level for these data vary depending on the characteristics of the individual clouds observed, but the quality of the data is high enough to trace the total CO intensity of the individual clouds. The three surveys were repixelised into the HEALPix pixelisation scheme (Górski et al. 2005) with the appropriate pixel size to ensure Shannon sampling of the beam (Nside = 2048 for the NANTEN survey and Nside = 1024 for the CfA surveys), using the procedure described in Sect. 2.3.1. Each survey was smoothed to a common resolution of 8.8′ through convolution with a Gaussian whose kernel size was adjusted to go from the original resolution of each survey to the goal resolution of 8.8′, using the smoothing capabilities of the HEALPix software. We checked the consistency of the different surveys in the common region observed with NANTEN and CfA. We found a reasonably good correlation between the two, but with a slope indicating that the NANTEN survey yields 24% larger intensities than the CfA values. The origin of this discrepancy is currently unknown. We should note that the absolute intensity scale in CO observations is not highly accurate, as noted often in previous CO papers. Since the CfA survey covers most of the regions used in this paper and has been widely used for calibrating the H₂ mass calibration factor X_CO, in particular by several generations of gamma-ray satellites, we adopted the CfA photometry when merging the data, and therefore rescaled the NANTEN data down by 24% before merging. Note that this is an arbitrary choice. The implications for our results will be discussed in Sec. 6.1. The 3 surveys were then combined into a single map. In doing so, data from different surveys falling into the same pixel were averaged using σ⁻² as a weight. The resulting combined map was then smoothed to the resolution appropriate to this study. The resulting CO integrated intensity map is shown in Fig. 1.

IR data

We use the IRIS (Improved Reprocessing of the IRAS Survey) IRAS 100 µm data (see Miville-Deschênes & Lagache 2005) in order to constrain the dust temperature. The data, provided in the original format of individual tiles spread over the entire sky, were combined into the HEALPix pixelisation using the method described in Sect. 2.3.1, at a HEALPix resolution of Nside = 2048, corresponding to a pixel size of 1.7′. The IRAS coverage maps were also processed in the same way. We assume the noise properties given in Miville-Deschênes & Lagache (2005) and listed in Table 1. The noise level of 0.06 MJy sr⁻¹ at 100 µm was assumed to represent the average data noise level and was appropriately multiplied by the coverage map to give the pixel variance of the data.

Common angular resolution and pixelisation

The ancillary data described in Sect. 2.2 were brought to the HEALPix pixelisation using a method in which the surface of the intersection between each HEALPix pixel and each FITS pixel of the survey data is computed and used as a weight to regrid the data; this procedure was shown to preserve photometric accuracy. The HEALPix resolution was chosen so as to match the Shannon sampling of the original data at resolution θ, with the HEALPix resolution set so that the pixel size is < θ/2.4. The ancillary data and the description of their processing will be presented in Paradis et al. (2011).
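A minimal sketch of the resolution matching and inverse-variance merging described in this section, using the healpy bindings to HEALPix (function names are illustrative, and per-pixel variances are assumed to be available for each survey):

```python
import numpy as np
import healpy as hp

ARCMIN_TO_RAD = np.pi / (180.0 * 60.0)

def smooth_to(m, fwhm_native_arcmin, fwhm_goal_arcmin):
    """Convolve a HEALPix map with a Gaussian whose FWHM is the quadrature
    difference between the goal and native resolutions."""
    kernel = np.sqrt(fwhm_goal_arcmin**2 - fwhm_native_arcmin**2)
    return hp.smoothing(m, fwhm=kernel * ARCMIN_TO_RAD)

def merge(maps, variances):
    """Average co-gridded surveys pixel by pixel with 1/sigma^2 weights."""
    w = 1.0 / np.asarray(variances)
    return np.sum(w * np.asarray(maps), axis=0) / np.sum(w, axis=0)

# e.g. bring the 2.6' NANTEN map and an 8.4' CfA map to the 8.8' goal:
# nanten_88 = smooth_to(0.76 * nanten, 2.6, 8.8)   # 24% rescaling first
# cfa_88 = smooth_to(cfa, 8.4, 8.8)
# wco = merge([nanten_88, cfa_88], [var_nanten, var_cfa])
```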
All ancillary data were then smoothed to the appropriate resolution by convolution with a Gaussian smoothing function of appropriate FWHM, using the smoothing HEALPix function, and were brought to a pixel size matching the Shannon sampling of the final resolution.

Background levels

Computing the apparent temperature and optical depth of thermal dust over the whole sky requires accurate subtraction of any offset (I⁰_ν) in the intensity data, whether of instrumental or astrophysical origin. Although both the IRIS and the Planck-HFI data used in this study have been carefully treated with respect to residual offsets during calibration against the FIRAS data, the data still contain extended emission unrelated to the Galactic emission, such as the Cosmic Infrared Background (CIB) signal (Miville-Deschênes et al. 2002; Planck Collaboration 2011d) or zodiacal light, which could affect the determination of the dust temperature and optical depth at low surface brightness. In order to estimate the above data offsets, we first compute the correlation between IR and H i emission in a reference region such that |b_II| > 20° and N_H^HI < 1.2 × 10²¹ H cm⁻². This was done using the IDL regress routine with iterative removal of outliers. The derived dust emissivities ((I_ν/N_H)_ref) are given in Table 2. The uncertainties given are those derived from the correlation using the data variance as the data uncertainty. The derived emissivities are in agreement with the ensemble average of the values found for the local H i velocities in Planck Collaboration (2011i) (see their Table 2) for individual smaller regions at high Galactic latitude, within the uncertainties quoted in Table 2. Note that these emissivities are used only to derive the offsets in this study. We then select all sky pixels with minimum H i column density, defined as N_H^HI < 2.0 × 10¹⁹ H cm⁻², and compute the average H i column density in this region to be N_H^hole = 1.75 × 10¹⁹ H cm⁻². The offsets are then computed assuming that the dust emissivity in this region is the same as in the reference region, i.e.,

I⁰_ν = I_ν^hole − (I_ν/N_H)_ref N_H^hole ,   (1)

where I_ν^hole is the average brightness in the hole region at frequency ν. The offset values derived from the above procedure are given in Table 2 and were subtracted from the maps used in the rest of this study. The offset uncertainties in Table 2 were derived from the emissivity uncertainties propagated to the offset values through Eq. 1. When subtracting the above offsets from the IRAS and Planck intensity maps, the data variances were combined with the offset uncertainties in order to reflect the uncertainty on the offset determination. Note that, for consistency and future use, Table 2 also lists emissivities and offset values for FIR-mm datasets not used in this study. Note also that these offsets for Planck data are not meant to replace the official values provided with the data, since they suppress any large-scale emission not correlated with H i, whatever its origin.

Temperature determination

As shown in previous studies (e.g. Reach et al. 1995; Finkbeiner et al. 1999; Paradis et al. 2009; Planck Collaboration 2011e,i), the dust emissivity spectrum in our Galaxy cannot be represented by a single dust emissivity index β over the full FIR-submm domain. The available data indicate that β is usually steeper in the FIR and flatter in the submm band, with a transition around 500 µm.
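Returning to the offset determination of Eq. (1) above, a minimal sketch is given below. Array names are illustrative, and the iterative outlier rejection used with the IDL regress routine is omitted for brevity.

```python
import numpy as np

def estimate_offset(I_nu, N_HI, abs_b):
    """Offset I0_nu from Eq. (1): calibrate the dust emissivity in a
    low-column reference region, then extrapolate to the lowest-N_HI
    'hole' pixels.
    I_nu: intensity map; N_HI: HI column density (H/cm^2); abs_b: |b_II| (deg).
    """
    ref = (abs_b > 20.0) & (N_HI < 1.2e21)                # reference region
    emissivity = np.polyfit(N_HI[ref], I_nu[ref], 1)[0]   # (I_nu / N_H)_ref
    hole = N_HI < 2.0e19                                  # lowest-N_HI pixels
    return I_nu[hole].mean() - emissivity * N_HI[hole].mean()
```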
As the dust temperature is best derived around the emission peak, we limit the range of frequencies used in the determination to the FIR, which limits the impact of a potential change of β with frequency. In addition, the derived dust temperature will depend on the assumption made about β, since these two parameters are somewhat degenerate in χ² space. In order to minimize the above effect, we derived dust temperature maps using a fixed value of the dust emissivity index β. The selected β value was derived by fitting each pixel of the maps with a modified black body of the form I_ν ∝ ν^β B_ν(T_D) in the above spectral range (a method referred to as "free β"). This leads to median values of T_D = 17.7 K and β = 1.8 in the region at |b_II| > 10°. Note that this β value is consistent with that derived from the combination of the FIRAS and Planck-HFI data at low column density in Planck Collaboration (2011i). Inspection of the corresponding T_D and β maps indeed showed spurious values of both parameters, caused by their correlation and the presence of noise in the data, in particular in low-brightness regions of the maps. We then performed fits to the FIR emission using the fixed β = 1.8 value derived above (a method referred to as "fixed β"). In the determination of T_D, we used the IRIS 100 µm map and the two highest HFI frequencies at 857 and 545 GHz. Although the median reduced χ² is slightly higher than for the "free β" method, the temperature maps show many fewer spurious values, in particular in low-brightness regions. This results in a sharper distribution of the temperature histogram. Since we later use the temperature maps to investigate the spectral distribution of the dust optical depth, and the dust temperature is a source of uncertainty, we adopt the "fixed β" maps in the following. The corresponding temperature and uncertainty maps are shown in Fig. 3. Temperature maps were derived at the common resolution of those three channels, as well as at the lower resolutions of other datasets. The model was used to compute the emission in each photometric channel of the instruments used here (IRAS, Planck-HFI), taking into account the colour corrections using the actual transmission profiles for each instrument and following the adopted flux convention. In the interest of computing efficiency, the predictions of a given model were tabulated for a large set of parameters (T_D, β). For each map pixel, the χ² was computed for each entry of the table, and the shape of the χ² distribution around the minimum value was used to derive the uncertainty on the free parameters. This included the effect of the data variance σ²_II and the absolute uncertainties.

Angular distribution of dust temperature

The all-sky map of the thermal dust temperature, computed as described in Sec. 3.1 for β = 1.8, is shown in Fig. 5. The elongated regions with missing values in the map correspond to the IRAS gaps, where the temperature cannot be determined from the Planck-HFI data alone. The distribution of the temperature clearly reflects the large-scale distribution of the radiation field intensity. Along the Galactic plane, a large gradient can be seen from the outer Galactic regions, with T_D ≃ 14-15 K, to the inner Galactic regions around the Galactic centre, with T_D ≃ 19 K. This asymmetry was already seen at lower angular resolution in the DIRBE (Sodroski et al. 1994) and FIRAS (Reach et al. 1995) data.
The asymmetry is probably due to the presence of more massive stars in the inner Milky Way regions, in particular in the molecular ring. The presence of warmer dust in the inner Galaxy is clearly highlighted by the radial distribution of the dust temperature derived from Galactic inversion of IR data (e.g. Sodroski et al. 1994; Paladini et al. 2007). Some of the warm regions seen at high Galactic latitude may correspond to warm dust associated with hot gas pervading the Local Bubble around the Sun, or to a pocket of hot gas in Loop I. Similar large regions of enhanced dust temperature, such as near (l_II, b_II) = (340°, −30°) or (l_II, b_II) = (315°, +30°), may have a similar origin. Loop I, at (l_II, b_II) = (30°, +45°), is seen as a slightly warmer-than-average structure at T_D ≃ 19 K. Running parallel to it is the Aquila-Ophiuchus flare at (l_II, b_II) = (30°, +20°), with apparent T_D ≃ 14 K, extending to latitudes as high as 60°. The Cepheus and Polaris flares at (l_II, b_II) = (100-120°, +10-+20°) (see Planck Collaboration 2011i for a detailed study) are also clearly visible as a lower-temperature arch extending up to b_II = 30° into the North Celestial Pole loop and containing a collection of even colder condensations (T_D ≃ 12-13 K). On small angular scales, which are accessible over the whole sky only with the combination of the IRAS and Planck-HFI data at 5′, the map shows a variety of structures that can all be identified with local heating by known single stars or H ii regions for the warmer spots, and with molecular clouds for the colder regions. Figure 4 illustrates the high-resolution spatial distribution of dust temperature and dust optical depth around some of these regions. Warmer regions include the tangent directions to the Galactic spiral arms in Cygnus (l_II, b_II) = (80°, 0°) and Carina (l_II, b_II) = (280°, 0°), hosts to many OB associations, and many H ii regions along the plane. At higher Galactic latitude, dust heated by individual hot stars can clearly be identified, such as in the Ophiuchus region (l_II, b_II) = (340°, +20°) with the individual stars σ Sco, ν Sco, ρ Oph and ζ Oph, in Orion (l_II, b_II) = (210°, −20°) with the Trapezium stars, or in Perseus-Taurus (l_II, b_II) = (160°, −20°) with the California Nebula (NGC 1499). Note the Spica H ii region at (l_II, b_II) = (300°, +50°), where dust temperatures are T_D ≃ 20 K due to heating by UV photons from the nearby (80 pc) early-type giant (B1III) star α Vir. At intermediate and high latitudes, nearby molecular clouds generally stand out as cold dust environments with T_D ≃ 13 K. The most noticeable ones include Taurus. Near the Galactic poles, the temperature determination becomes noisy at the 5′ resolution due to the low signal levels.

Optical depth determination

Maps of the thermal dust optical depth (τ_D(λ)) are derived using

τ_D(λ) = I_ν(λ) / B_ν(T_D) ,

where B_ν is the Planck function and I_ν(λ) is the intensity map at frequency ν. We used resolution-matched maps of T_D and I_ν(λ) and derived τ_D(λ) maps at the various resolutions of the data used here. The maps of the uncertainty on τ_D(λ) (Δτ_D) are computed by propagating the intensity and temperature uncertainties.

Dust/Gas correlation

We model the dust opacity (τ_M) as

τ_M = (τ_D/N_H)_ref (N_HI + 2 X_CO W_CO) ,

where (τ_D/N_H)_ref is the reference dust emissivity measured in low-N_H regions and X_CO = N_H2/W_CO is the traditional H₂/CO conversion factor. It is implicitly assumed that the dust opacity per unit gas column density is the same in the atomic and molecular gas.
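The optical-depth map and the two-phase dust/gas model just defined can be sketched as follows. This is a minimal illustration: (τ_D/N_H)_ref and X_CO are left as inputs to be taken from the fits discussed next, and intensities are assumed to be in SI units.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
KB = 1.381e-23  # Boltzmann constant, J/K
C = 2.998e8     # speed of light, m/s

def b_nu(nu_hz, T):
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    return 2 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (KB * T))

def tau_dust(I_nu_si, nu_hz, T_D):
    """tau_D(lambda) = I_nu(lambda) / B_nu(T_D), per pixel."""
    return I_nu_si / b_nu(nu_hz, T_D)

def tau_model(N_HI, W_CO, tau_per_NH_ref, xco):
    """tau_M = (tau_D/N_H)_ref * (N_HI + 2 * X_CO * W_CO)."""
    return tau_per_NH_ref * (N_HI + 2.0 * xco * W_CO)
```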
Dust/Gas correlation

We model the dust opacity (τ_M) as

τ_M = (τ_D/N_H)_ref (N_HI + 2 X_CO W_CO),

where (τ_D/N_H)_ref is the reference dust emissivity measured in low N_H regions and X_CO = N_H₂/W_CO is the traditional H₂/CO conversion factor. It is implicitly assumed that the dust opacity per unit gas column density is the same in the atomic and molecular gas. If this is not the case, this will directly impact our derived X_CO, since only the product of X_CO by the dust emissivity in the CO phase, (τ_D/N_H)_CO, can be derived here.

The fit to derive the free parameters of the model is performed only in the portion of the sky covered by all surveys (infrared, H i, and CO) and where either (1) the extinction is less than a threshold A_V^DG, or (2) the CO is detected with W_CO > 1 K km s⁻¹. Criterion (1) selects the low column density regions that are entirely atomic and suffer very small H i optical depth effects, so that the dust in this region will be associated with the H i emission at 21 cm. Criterion (2) selects regions where the CO is significantly detected and the dust is associated with both the H i and the ¹²CO emission lines. We fit for the following three free parameters: (τ_D/N_H)_ref, X_CO and A_V^DG. The threshold A_V^DG measures the extinction (or equivalently the column density) where the correlation between the dust optical depth and the H i column density becomes non-linear.

The correlation between the optical depth for various photometric channels and the total gas column density (N_H^tot = N_HI + 2 X_CO W_CO) is shown in Fig. 6. The correlations were computed in the region of the sky where the CO data are available (about 63% of the sky) and at Galactic latitudes |b_II| > 10°. The τ_D and W_CO maps used were smoothed to the common resolution of the H i map (0.6°). For these plots, we used a fixed value of X_CO = 2.3 × 10²⁰ H₂ cm⁻²/(K km s⁻¹). At high column densities, where N_H^tot becomes dominated by the CO contribution, the dust optical depth is again consistent with the correlation observed at low N_H^tot for this given choice of the X_CO value. Between these two limits, the dust optical depth is in excess of the linear correlation. The same trend is observed in all photometric channels shown, with a similar value for the threshold. It is also observed in the HFI bands at lower frequencies, but the increasing noise at low N_H^tot prevents an accurate determination of the fit parameters.

The best fit parameters for (τ_D/N_H)_ref, X_CO and A_V^DG are given in Table 3. They were derived separately for each frequency. The uncertainty was derived from the analysis of the fitted χ² around the best value. The (τ_D/N_H)_ref values decrease with increasing wavelength, as expected for dust emission. The resulting dust optical depth SED is shown in Fig. 7. The dust optical depth in low column density regions is compatible with β = 1.8 at high frequencies. The best fit β value between the IRAS 100 µm and the HFI 857 GHz bands is actually found to be β = 1.75. The SED then flattens slightly at intermediate frequencies, with a slope of β = 1.57 around λ = 500 µm, then steepens again to β = 1.75 above 1 mm. The X_CO values derived from the fit are constant within the error bars, which increase with wavelength. The average value, computed using a weight proportional to the inverse variance, is given in Table 3 and is found to be X_CO = (2.54 ± 0.13) × 10²⁰ H₂ cm⁻²/(K km s⁻¹). Similarly, the A_V^DG parameter does not significantly change over the whole frequency range, and the weighted average value is found to be A_V^DG = 0.4 ± 0.029 mag.

All maps are shown in Galactic coordinates with the Galactic centre at the centre of the image. The missing data in all images correspond to the IRAS gaps. The upper and lower bounds of the colour scale are set to τ_min = 5 × 10⁻⁵ (λ/100 µm)^−1.8 and τ_max = 10⁻² (λ/100 µm)^−1.8, respectively.
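A compact sketch of the masked three-parameter fit described in this section; the threshold scan, initial values and mask logic are illustrative assumptions rather than the pipeline's actual implementation:

```python
import numpy as np

def fit_dust_gas(tau, N_HI, W_CO, A_V, Wco_min=1.0,
                 thresh_grid=np.linspace(0.1, 1.0, 46)):
    """Sketch of the three-parameter fit of tau_M = eps*(N_HI + 2*Xco*W_CO).
    For each trial threshold A_V^DG the pixel mask is rebuilt, (eps, Xco)
    follow from a linear least-squares fit, and the best threshold is the
    one minimizing the reduced chi^2 (an illustrative criterion)."""
    best = None
    for A_t in thresh_grid:
        m = (A_V < A_t) | (W_CO > Wco_min)       # criteria (1) or (2)
        M = np.column_stack([N_HI[m], W_CO[m]])  # tau ~ a*N_HI + b*W_CO
        (a, b), *_ = np.linalg.lstsq(M, tau[m], rcond=None)
        red_chi2 = np.sum((tau[m] - M @ [a, b])**2) / m.sum()
        if best is None or red_chi2 < best[0]:
            best = (red_chi2, A_t, a, b / (2 * a))  # b = 2*eps*Xco
    _, A_DG, eps_ref, X_CO = best
    return eps_ref, X_CO, A_DG
```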
The excess column density is defined using the difference between the best fit and the observed dust opacity per unit column density, using

N_H^x = [τ_D − τ_M] / (τ_D/N_H)_ref.

The N_H^x map is used to derive the total excess mass (M_H^x), assuming a fiducial distance to the gas responsible for the excess. We also computed the atomic and molecular total gas masses over the same region of the sky, assuming the same distance. These values are given in Table 3. On average, at high Galactic latitudes, the dark gas masses are of the order of 28% ± 3% of the atomic gas mass and 118% ± 12% of the molecular mass.

Dark-gas spatial distribution

The spatial distribution of the dark gas as derived from τ_D computed from the HFI 857 GHz channel is shown in Fig. 8. It is shown in the region where the CO data are available and at Galactic latitudes |b_II| > 5°. Regions where W_CO > 1 K km s⁻¹ have also been excluded. The map clearly shows that the dark gas is distributed mainly around the best known molecular clouds, such as Taurus, the Cepheus and Polaris flares, Chamaeleon and Orion. The strongest excess region is in the Aquila–Ophiuchus flare, which was already evident in Grenier et al. (2005). Significant dark gas is also apparent at high latitudes, south of the Galactic plane in the anticentre, and around known translucent molecular clouds, such as MBM 53 (l_II = 90°, b_II = −30°). As with all the molecular clouds, the spatial distribution of the dark gas closely follows that of the Gould Belt (Perrot & Grenier 2003) and indicates that most of the dark gas in the solar neighbourhood belongs to this dynamical structure.

Dust emissivity in the atomic neutral gas

In the solar neighbourhood, Boulanger et al. (1996) measured an emissivity value in the diffuse medium of 10⁻²⁵ cm²/H at 250 µm, assuming a spectral index β = 2, which seemed consistent with their data. The optical depth of dust derived in our study in the low N_H^tot regions at |b_II| > 10° is shown in Fig. 7. The figure also shows the reference value by Boulanger et al. (1996), which is in good agreement with the values derived here, interpolated at 250 µm (in fact 10% above when using β = 1.8 and 6% above when using β = 1.75).

Fig. 7. Dust optical depth derived from this study using the IRAS and Planck-HFI frequencies. The square symbol shows the emissivity at 250 µm derived by Boulanger et al. (1996). The dash and dash-dot lines show a power law emissivity with λ^−1.8 and λ^−1.75 respectively, normalized to the data at 100 µm. The error bars shown are ±1σ.

Our study does not allow us to measure the emissivity in the molecular gas, since we are only sensitive to the product of this emissivity with the X_CO factor. However, we note that our derived average X_CO = 2.54 × 10²⁰ H₂ cm⁻²/(K km s⁻¹) is significantly higher than previously derived values. Even if we account for the possible uncertainty in the calibration of the ¹²CO(J=1→0) emission (24%) discussed in Sec. 2.2.2, increasing the CO emission by the corresponding factor would only lower our X_CO estimate to X_CO = 2.2 × 10²⁰ H₂ cm⁻²/(K km s⁻¹). In comparison, a value of (1.8 ± 0.3) × 10²⁰ H₂ cm⁻²/(K km s⁻¹) was found at |b_II| > 5° from the comparison of the H i, CO, and IRAS 100 µm maps (Dame et al. 2001). Similarly, values derived from γ-ray FERMI data can be as low as X_CO = 0.87 × 10²⁰ H₂ cm⁻²/(K km s⁻¹) in Cepheus, Cassiopeia and Polaris (Abdo et al. 2010). This could be evidence that the dust emissivity in the high-latitude molecular material could be larger than in the atomic phase by a factor of about 3. Such an increase in the dust emissivity in molecular regions has been inferred in previous studies (e.g. Bernard et al. 1999; Stepnik et al. 2003) and was attributed to dust aggregation.
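The "factor of about 3" follows directly from the ratio of the dust-derived X_CO to the lowest γ-ray value quoted above, since only the product of emissivity and X_CO is constrained:

```python
# Only the product (dust emissivity in the CO phase) x X_CO is constrained,
# so a lower "true" X_CO implies a proportionally higher emissivity.
X_CO_dust = 2.54e20    # H2 cm^-2 / (K km s^-1), this work
X_CO_gamma = 0.87e20   # gamma-ray value of Abdo et al. (2010)
print(X_CO_dust / X_CO_gamma)   # ~2.9, the "factor of about 3"
```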
Dark molecular gas

The nature of "dark molecular gas" has recently been investigated theoretically by Wolfire et al. (2010), who specifically address the H i/H₂ and C/C⁺ transition at the edges of molecular clouds. The nominal cloud modeled in their study is relatively large, with total column density 1.5 × 10²² cm⁻², so the applicability of the results to the more translucent conditions of high-Galactic-latitude clouds is not guaranteed. The envelope of the cloud has an H i column density of 1.9 × 10²¹ cm⁻², which is more typical of the entire column density measured at high latitudes. Wolfire et al. (2010) define f_DG as the fraction of molecular gas that is dark, i.e. not detected by CO. In the nominal model, the chemical and photodissociation balance yields a total H₂ column density of 7.0 × 10²¹ cm⁻², while the "dark" H₂ in the transition region where CO is dissociated has a column density of 1.9 × 10²¹ cm⁻². The fraction of the total gas column density that is molecular, f(H₂), is 93% in the nominal model, which suggests that the line of sight through such a cloud passes through material which is almost entirely molecular.

To compare the theoretical model to our observational results, we must put them into the same units. We define the dark gas fraction as the fraction of the total gas column density that is dark,

f_DARK = f_DG × f(H₂).

For the nominal Wolfire et al. (2010) model, f_DG = 0.29, so we can infer f_DARK = 0.27. The smaller clouds in Figure 11 of their paper have larger f_DG, but f(H₂) is also probably smaller (not given in the paper), so we cannot yet definitively match the model and observations. These model calculations are in general agreement with our observational results, in that a significant fraction of the molecular gas can be in CO-dissociated "dark" layers. If we assume that all dark molecular gas in the solar neighbourhood is evenly distributed among the observed CO clouds, the average f_DG measured is in the range f_DG = 1.06–1.22. This is more than three times larger than predicted by the Wolfire et al. (2010) mass fraction. This may indicate that molecular clouds less massive than the ones assumed in the model actually have a dark gas mass fraction higher by a factor of about three. This would contradict their conclusion that the dark mass fraction does not depend on the total cloud mass. The location of the H i-to-H₂ transition measured here (A_V^DG ≈ 0.4 mag) is comparable to, although slightly higher than, that predicted in the Wolfire et al. (2010) model (A_V^DG ≈ 0.2 mag). Again, this difference may indicate variations with the cloud size used, since UV shadowing by the cloud itself is expected to be less efficient for smaller clouds, leading to a transition deeper into the cloud.

Other possible origins

The observed departure from linearity between τ_D and the observable gas column density could also in principle be caused by variations of the dust/gas ratio (D/G). However, such variations, with an amplitude of 30% in the solar neighbourhood and a systematic trend for a higher D/G ratio in denser regions, would be difficult to explain over such a small volume and in the presence of widespread enrichment by star formation.
Moreover, the fact that the dark gas is also seen in the γ-ray data with comparable amplitudes is a strong indication that it originates from the gas phase. The dark gas column densities inferred from the γ-ray observations are also consistent with a standard D/G ratio (Grenier et al. 2005).

The observed excess optical depth could also in principle be due to variations of the dust emissivity in the FIR–submm. We expect such variations to occur if dust is in the form of aggregates with higher emissivity (e.g. Stepnik et al. 2003) in the dark gas region. We note however that such modifications of the optical properties mainly affect the FIR–submm emissivity and are not expected to modify significantly the absorption properties in the visible. Therefore, detecting a similar departure from linearity between large-scale extinction maps and the observable gas would allow us to exclude this possibility.

Sky directions where no CO is detected at the sensitivity of the CO survey used (0.3–1.2 K km s⁻¹) may actually host significant CO emission, which could be responsible for the excess dust optical depth observed. Evidence for such diffuse and weakly emitting CO gas has been reported. For instance, in their study of the large-scale molecular emission of the Taurus complex, Goldsmith et al. (2008) found that half the mass of the complex is in regions of low column density, N_H < 2 × 10²¹ cm⁻², seen below W_CO ≈ 1 K km s⁻¹. However, Barriault et al. (2010) reported a poor spatial correlation between emission by diffuse CO and regions of FIR excess in two high Galactic latitude regions in the Polaris Flare and Ursa Major. The difficulty in finding the CO emission associated with dark gas is that the edges of molecular clouds tend to be highly structured spatially, which could explain why many attempts have been unsuccessful (see for instance Falgarone et al. 1991). In our case, it is possible to obtain an upper limit to the contribution of weak CO emission below the survey detection threshold, by assuming that pixels with undetected CO emission actually emit with W_CO = 0.5 K km s⁻¹. This is the detection limit of the survey we use at |b| > 10°, so this should be considered an upper limit to the contribution of undetected diffuse CO emission. In that case, the dark gas mass is reduced by less than 20%. This indicates that, although diffuse weak CO emission could contribute a fraction of the observed excess emission, it cannot produce the bulk of it.

Fig. 8. Map of the excess column density derived from the 857 GHz data. The map is shown in Galactic coordinates with the Galactic centre at the centre of the image. The grey regions correspond to those where no IRAS data are available, regions with intense CO emission (W_CO > 1 K km s⁻¹) and the Galactic plane (|b_II| < 5°).

Finally, we recognize that the optically thin approximation used here for the H i emission may not fully account for the whole atomic gas present, even at high latitude. H i emission is subject to self-absorption, and N_H can be underestimated from applying too high a spin temperature (T_s) while deriving column densities. T_s is likely to vary from place to place, depending on the relative abundance of CNM clumps (with thermodynamical temperatures of 20–100 K) and WNM clouds (at several thousand K) in the telescope beam. The effective spin temperature of 250–400 K to be applied to correct for this blending and to retrieve the total column density from the H i spectra does not vary much in the Galaxy (Dickey et al. 2003, 2009).
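For reference, the standard 21-cm column density relations underlying this discussion can be sketched as follows (the T_s prescription in the docstring is the one adopted in the analysis described next):

```python
import numpy as np

def N_H_21cm(T_B, dv_kms, T_s=None):
    """Column density (cm^-2) from a 21-cm brightness-temperature spectrum.
    Optically thin:  N_H = 1.823e18 * sum(T_B) * dv
    Uniform T_s:     N_H = -1.823e18 * T_s * sum(ln(1 - T_B/T_s)) * dv
    These are the standard relations; the paper's per-sightline choice is
    T_s = 80 K where the peak is below 80 K, else T_s = 1.1 * T_peak."""
    T_B = np.asarray(T_B, dtype=float)
    if T_s is None:
        return 1.823e18 * T_B.sum() * dv_kms
    return -1.823e18 * T_s * np.log(1.0 - T_B / T_s).sum() * dv_kms
```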
These measurements indicate that most of the H i mass is in the warm phase and that the relative abundance of cold and warm H i is a robust fraction across the Galaxy (outside of the inner molecular ring). The correlation between the FERMI γ-ray maps and the H i column densities derived for different spin temperatures also supports an average (uniform) effective spin temperature > 250 K on and off the plane. In order to test these effects, we performed the analysis described in this paper using a very low choice for the H i spin temperature. We adopted a value of T_s = 80 K when the observed H i peak temperature is below 80 K, and T_s = 1.1 × T_peak when above. Under this hypothesis, we obtained dark gas fractions which are about half of those given in Table 3 under the optically thin approximation. We consider this to indicate that significantly less than half of the detected dark gas could be dense, cold atomic gas. We further note that, under the optically thin H i hypothesis, the dark gas fraction appears very constant with Galactic latitude down to |b_II| ≈ 3° (see Sec. 6.4), while it varies more strongly using T_s = 80 K. This does not support the interpretation that the bulk of the dust excess results from underestimated H i column densities.

Fig. 9. Fractional mass of the dark gas with respect to the neutral gas mass as a function of the lower b_II value used in the analysis. The solid curve is computed under the assumption of optically thin H i; the dashed curve is for N_HI computed using T_s = 80 K. Error bars are 1σ.

Dark-gas variations with latitude

We investigate the distribution of the dark gas as a function of Galactic latitude. This is important, since the dark gas template produced here for the solar neighbourhood is also used in directions toward the plane for Galactic inversion purposes in Planck Collaboration (2011f). We performed the calculations described in Sec. 4 for various values of the Galactic latitude lower cutoff (b_min) in the range b_min < |b_II| < 90°, with b_min varying from 0° to 10°. For each value, we used the best fit parameters derived for b_min = 10° and given in Table 3. Figure 9 shows the evolution of the dark gas mass fraction with respect to the atomic gas mass as a function of b_min. It can be seen that the ratio changes only mildly (it increases by a factor of 1.12 from b_min = 10° to b_min = 2°) as we approach the Galactic plane. This indicates that a fairly constant fraction of the dark gas derived from the solar neighbourhood can be applied to the rest of the Galaxy. Figure 9 also shows the same quantity computed using the H i column density derived using T_s = 80 K. It can be seen that, in that case, the dark gas fraction is predicted to decrease by a factor of 2.12 from b_min = 10° to b_min = 2°. This is caused by the much larger inferred H i masses toward the plane under this hypothesis. We consider it unlikely that the dark gas fraction varies by such a large factor from the solar neighbourhood to the Galactic plane, and consider it more likely that the correction applied to N_H by using a spin temperature as low as T_s = 80 K actually strongly overestimates the H i opacity, and therefore the fraction of the dark gas belonging to atomic gas.

Conclusions

We used the Planck-HFI and IRAS data to determine all-sky maps of the thermal dust temperature and optical depth. The temperature map traces the spatial variations of the radiation field intensity associated with star formation in the Galaxy. This type of map is very important for the detailed analysis of the dust properties and their spatial variations.
We examined the correlation between the dust optical depth and the gas column density as derived from H i and CO observations. These two quantities are linearly correlated below a threshold column density of N_H^obs < 8.0 × 10²⁰ H cm⁻², corresponding to A_V < 0.4 mag. Below this threshold, we observed dust emissivities following a power law with β ≈ 1.8 below λ ≈ 500 µm and flattening at longer wavelengths. Absolute emissivity values derived in the FIR are consistent with previous estimates. This linear correlation also holds at high column densities (N_H^obs > 5 × 10²¹ H cm⁻², corresponding to A_V = 2.5 mag), where the total column density is dominated by the molecular phase, for a given choice of the X_CO factor. Under the assumption that the dust emissivity is the same in both phases, this leads to an estimate of the average local CO-to-H₂ factor of X_CO = 2.54 × 10²⁰ H₂ cm⁻²/(K km s⁻¹).

The optical depth in the intermediate column density range shows an excess in all photometric channels considered in this study. We interpret the excess as dust emission associated with dark gas, probably in the molecular phase, where H₂ survives photodissociation while the CO molecule does not. In the solar neighbourhood, the derived mass of the dark gas, assuming the same dust emissivity as in the H i phase, is found to correspond to 28% of the atomic mass and 118% of the molecular gas mass. The comparison of this value with the recent calculations for dark molecular gas around clouds more massive than the ones present in the solar neighbourhood indicates a dark gas fraction about three times larger in the solar neighbourhood. The threshold for the onset of the dark gas transition is found to be A_V ≈ 0.4 mag and appears compatible with, although slightly larger than, the threshold predicted by this model.

Finally, we stress that the H i 21-cm line is unlikely to be fully optically thin and therefore to measure all the atomic gas. The dark gas detected here could thus represent a mixture of dark molecular and dark atomic gas seen through its dust emission. For an average H i spin temperature of 80 K, the mixture is predicted to be 50% atomic and 50% molecular.
2011-01-11T18:41:16.000Z
2011-01-11T00:00:00.000
{ "year": 2011, "sha1": "26339bc3b7c8d4a29c3d57c4296c6fdfb88db3b3", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2011/12/aa16479-11.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "26339bc3b7c8d4a29c3d57c4296c6fdfb88db3b3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
42437325
pes2o/s2orc
v3-fos-license
Active immunity and T-cell populations in pigs intraperitoneally inoculated with baculovirus-expressed transmissible gastroenteritis virus structural proteins

The intraperitoneal inoculation of pigs with baculovirus-expressed transmissible gastroenteritis virus (TGEV) structural proteins (S, N, M) in conjunction with the thermolabile Escherichia coli mutant toxin (LT-R192G) in incomplete Freund's adjuvant (IFA) was tested in an attempt to elicit active immunity to TGEV in gut-associated lymphoid tissues (GALT). Four groups comprising 63 (1–5-week-old) suckling, TGEV-seronegative pigs were used to assess the efficacy of the recombinant protein vaccine (group 3) in comparison with sham (group 1), commercial vaccine (group 2), and virulent TGEV Miller-strain-inoculated pigs (group 4). The TGEV-specific mucosal and systemic immune responses were measured after in vivo and in vitro stimulation with TGEV-antigens. The major T-cell subset distribution was analyzed in vivo and in vitro after stimulation of mononuclear cells with TGEV (from mesenteric lymph nodes of group 3 pigs inoculated with TGEV-recombinant proteins). Induction of active immunity was assessed by challenge of pigs with virulent TGEV at 27 days of age. Baculovirus-expressed TGEV proteins coadministered with LT-R192G in IFA induced mesenteric lymph node immune responses associated with IgA-antibodies to TGEV and partial protection against TGEV-challenge. The high titers of serum IgG- and virus-neutralizing antibodies to TGEV in group 3 pigs most likely reflected the dose of TGEV S-protein administered. On the day of TGEV-challenge, the in vitro stimulation of mononuclear cells from the mesenteric lymph nodes of group 3 pigs with inactivated TGEV resulted in an increase in double positive (CD4+CD8+), natural killer (CD2+CD4−CD8+dim) and cytotoxic (CD2+CD4−CD8+bright) T-cell phenotypes, accompanied by increased expression of interleukin-2 receptor and a decrease of the null (CD2−CD4−CD8−/SW6+) cell phenotype.

Introduction

Transmissible gastroenteritis (TGE) is a devastating viral disease of pigs under 2 weeks of age, characterized by a mortality rate of up to 100%. Transmissible gastroenteritis virus (TGEV) remains enzootic in Europe, the USA and other countries with an intensive pork industry (Wesley et al., 1997; Paton et al., 1998). There are currently no effective TGEV vaccines, which emphasizes the importance of developing new prevention and control methods for this pathogen. In other species, the intraperitoneal (IP) administration of antigens elicited peritoneal B-cells which were able to repopulate other compartments of the common mucosal immune system, including the lamina propria, with IgA-antibody secreting cells (Wu and Russell, 1994). In preweaning and postweaning pigs, IP immunization with a killed Escherichia coli vaccine emulsified in an equal volume of Freund's incomplete adjuvant reduced the mortality rate among vaccinated pigs to half that of unvaccinated controls (Husband and Seaman, 1979). Protective active immunity to TGEV in lactating sows is associated with stimulation of secretory IgA (SIgA) inductive sites in the GALT and the proposed trafficking of effector IgA antibody secreting cells to the intestine and mammary glands (Saif, 1996).
In our study, we examined the ability of the three major structural TGEV proteins (S = spike glycoprotein, N = nucleocapsid phosphoprotein and M = membrane glycoprotein), previously cloned and expressed in a baculovirus expression system, to induce protective active immunity in pigs against challenge with TGEV Miller-strain when given in conjunction with the thermolabile E. coli mutant toxin (LT-R192G). The native TGEV S-glycoprotein (220 kDa) is the major structural TGEV protein capable of inducing virus-neutralizing (VN) antibodies and is responsible for virus attachment. For our experimental vaccine, the baculovirus-expressed TGEV S-recombinant (R2-2), which is truncated (150 kDa) but contains the four major antigenic sites (A, B, C, D) and induces VN-antibodies, was used (Shoup et al., 1997; Park et al., 1998). In addition, a baculovirus-expressed N-phosphoprotein (47 kDa) and M-glycoprotein (29–36 kDa) were included because of their involvement in T-helper cell stimulation (Anton et al., 1995) and their effects on α-interferon production, respectively. Although several non-toxic LT-mutants and recombinants have been prepared with partial knockout of the ADP-ribosyltransferase activity (O'Neal et al., 1998; Giuliani et al., 1998), exploration of the mucosal adjuvanticity of such enzymatically inactive LT forms has not been adequately addressed in the context of their potential to stimulate IgA responses in farm animals. In our study, in order to increase the immunogenicity of IP-delivered TGEV protein vaccines, two adjuvants were included: LT-R192G and IFA.

Qualitative characteristics of B-cell responses, including the TGEV-specific B-memory cells from mucosal and systemic lymphoid tissues, were examined for pigs inoculated IP with baculovirus-expressed TGEV proteins and compared with responses in pigs given a commercial TGEV vaccine, sham inocula or virulent TGEV. Also, the changes in numbers of several T-cell phenotypes were measured at 27 days of age (day of challenge) by flow cytometry (CD4+CD8+, i.e. double positive T-cells; CD2+CD4−CD8+dim, i.e. NK-cells; CD2+CD4−CD8+bright, i.e. cytotoxic T-cells; CD2−CD4−CD8−, i.e. null-cells; and CD25+, i.e. memory T-cells) after the in vivo or in vitro stimulation of mononuclear cells from mesenteric lymph nodes of pigs inoculated with TGEV recombinant proteins. TGEV-shedding in feces, TGE clinical symptoms and the presence of SIgA were measured after challenge of all pig groups with virulent TGEV at 27 days of age.

Inocula

A commercial killed TGEV vaccine licensed for IP use (TG-Emune with Imugen II, lot number 54023, Oxford Veterinary Laboratories, Worthington, MN) was purchased. Genes coding for the TGEV S-, N-, and M-proteins (Shoup et al., 1997; Pulford and Britton, 1991a; Baudoux et al., 1995) were cloned and expressed in a baculovirus expression system as described (Shoup et al., 1997). The amounts of TGEV-specific proteins contained within the baculovirus protein lysates were determined by the use of continuous-elution electrophoresis (Kyd et al., 1994). Briefly, this involved applying 3.5 mg of solubilized (total) lysate protein to a manifold-tube of the BioRad Mini Prep Cell (BioRad, Hercules, CA) containing a 6–10% separating and 4% stacking polyacrylamide gel, based on the size of the TGEV protein to be separated (S: 6%; N and M: 10%). Protein mixtures were fractionated based on their size at constant voltage (200 V), with a vacuum pump attached to collect the individual protein fractions. The TGEV-specificity of individual fractions was tested by Western blot (Stott, 1989) using hyperimmune serum to TGEV Miller strain. The yield of TGEV S, N and M proteins was 0.15, 0.5 and 0.4 mg/ml, respectively, of the original (7 mg/ml) protein concentration, which corresponded to 2, 7 and 6% of TGEV S, N and M proteins contained within the baculovirus lysates.
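As a quick check, the quoted expression levels follow from the measured yields relative to the total lysate protein concentration:

```python
# Expression levels implied by the yields, relative to 7 mg/ml total protein:
for protein, yield_mg_ml in {"S": 0.15, "N": 0.5, "M": 0.4}.items():
    print(protein, round(100 * yield_mg_ml / 7.0, 1), "%")  # ~2, 7 and 6%
```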
The dose of TGEV recombinant proteins administered per individual pig was determined based on a dose-response pilot experiment with age-matched pigs and the S-protein, as the dose eliciting the highest TGEV-seroconversion measured by a VN-test (50 µg). Because of the high amount of TGEV proteins needed for double inoculation of 16 group 3 pigs and the relatively high expression levels (2–7%), the baculovirus protein lysates were administered IP directly, without the above semipurification step. The dose of the other TGEV (N, M) and wild type baculovirus (ACNPV) proteins used was the same as that for the TGEV S-protein (50 µg). Prior to IP administration, the TGEV-specificity of all baculovirus-expressed protein lysates was tested by Western blot (Stott, 1989) using hyperimmune serum to TGEV Miller-strain.

As an adjuvant, the thermolabile E. coli mutant toxin LT-R192G (which has a glycine instead of an arginine at position 192 of the protein, rendering it nontoxic at adjuvant-effective doses) was used (courtesy of Dr. J.D. Clements, Tulane University). A second dose-response pilot experiment was performed with the LT-R192G adjuvant in order to assess its potential toxicity. In our suckling conventional pigs, none of the LT-R192G doses administered IP, ranging from 1–25 µg/pig, resulted in diarrhea. The adjuvant dose used in our study was 10 µg/pig. The second adjuvant used was incomplete Freund's adjuvant (IFA, Gibco, Grand Island, NY). IFA was mixed with an equal volume of the TGEV proteins containing the LT-R192G (2.5–5 ml total volumes).

Pig vaccinations and challenge

Four groups of TGEV-seronegative conventional (Large White × Landrace) pigs (n = 63) were inoculated IP at 7–9 and 15–17 days of age as follows. Group 1 (n = 29) consisted of four subgroups of negative control pigs inoculated as follows: 11 pigs were inoculated with wild type baculovirus (ACNPV) protein (50 µg/pig) in IFA; six pigs were inoculated with sterile phosphate-buffered saline (PBS, pH 7.3) in IFA; six pigs were inoculated with LT-R192G (10 µg/pig) in PBS (pH 7.3); and six pigs were not inoculated. Group 2 (n = 12) pigs were inoculated with a commercial killed TGEV vaccine licensed for IP use, according to the manufacturer's instructions (Oxford Veterinary Laboratories), at the above time points. Group 3 (n = 16) pigs were inoculated at both 7–9 and 15–17 days of age with TGEV S, N and M proteins (50 µg of each/pig) together with LT-R192G (10 µg/pig) in IFA. Group 4 (n = 6) pigs served as positive controls and were oronasally infected at 11 days of age with 1 × 10⁵ plaque-forming units (PFU) of virulent TGEV Miller strain. All of the group 4 pigs became infected, exhibited severe diarrhea and shed TGEV in feces, as measured at 15 days of age. All pigs were weaned at 24 days of age and challenged oronasally with 1 × 10⁶ PFU of virulent TGEV Miller-strain at 27 days of age (post-challenge day, PCD 0).
Clinical signs, collection of rectal swabs and TGEV-antigen detection in feces

Clinical signs of TGE (0–5 scale), including dehydration, anorexia and lethargy, were recorded daily between PCD 0 and 10. Clinical scores were as follows: 5 = moribund or dead; 4 = very weak, still able to stand; 3 = marked dehydration; 2 = moderate dehydration; 1 = mild dehydration; 0 = normal. Diarrhea scores were recorded as follows: 4 = watery; 3 = semi-liquid; 2 = mucoid; 1 = pasty; 0 = normal. The value for the clinical score average was calculated as the average of the clinical and diarrhea scores for each group, as described (Sestak et al., 1996). Pigs were rectally swabbed at 2-day intervals up to PCD 12, and the presence of TGEV-antigens was measured by double-antibody sandwich ELISA as described (Sestak et al., 1996). The ELISA cut-off absorbance was calculated based on the mean ELISA absorbance of a population of known negative samples plus three times its standard deviation.

Serum samples and serum antibody assays

At the time of the first (7–9 days of age) and second (15–17 days of age) IP-inoculations, at PCD 0 (27 days of age) and at PCD 6–7, the pigs were bled, and serum was harvested and stored as described (Sestak et al., 1996). To detect TGEV-specific IgG and IgA antibodies in the serum of inoculated pigs, a monoclonal antibody-capture ELISA was used, as described (Park et al., 1998). The ELISA cut-off absorbance was established, based on a population of negative sera, as the mean ELISA absorbance plus three times the standard deviation. To detect the corresponding VN-antibodies, a plaque-reduction test was performed as described (Welch and Saif, 1988). The VN titer was expressed as the reciprocal of the highest serum dilution that neutralized cytopathic effects after 48 h at 37 °C.

Enzyme-linked immunospot assay (ELISPOT)

At PCD 0 and PCD 6–7, ileum lamina propria (I), mesenteric lymph node (MLN) and peripheral blood (PBL) lymphoid tissues were collected from 3–6 pigs of each group (except group 4, where only two pigs were available at each time point due to severe TGE prior to PCD 0, associated with 33% mortality), and the mononuclear cells (MNCs) were isolated as described (Yuan et al., 1996). Briefly, the ELISPOT was performed by using a coating antigen of acetone-fixed, TGEV Miller-strain-infected swine testis cell monolayers grown in 96-well cell culture plates. For the test, 100 µl of a MNC suspension (I, MLN, PBL) was added to each duplicate well at three cell concentrations (5 × 10⁵, 5 × 10⁴ and 5 × 10³ MNC/well), and plates were incubated for 6 h at 37 °C in a 5% CO₂ atmosphere. After incubation, MNCs were removed by washing the plates six times in 20 mM Tris–HCl (pH 8.0) containing 0.15 M NaCl and 0.1% (v/v) Tween 20. Next, 100 µl of biotinylated MAB to porcine IgA or IgG (MAB 6D11 and 3H7, respectively) (Paul et al., 1989), at concentrations of 1 and 0.4 µg/ml, respectively, was added to each well for 2 h at 20–22 °C. The dark-blue spots resulting from reactions between the TGEV-antigens and the antibodies secreted by the MNC were developed using horseradish peroxidase-conjugated streptavidin (Streptavidin POD conjugate; Boehringer Mannheim, Indianapolis, IN) and tetramethylbenzidine chromogenic substrate (TMB; Kirkegaard & Perry Laboratories, Gaithersburg, MD). Numbers of antibody secreting cells (ASC) were counted as described by VanCott et al. (1994). The ELISPOT was performed for MNCs isolated on the day of euthanasia (PCD 0 and 6–7), which reflected the in vivo immune responses, or after 4–5 days of incubation with 28 µg/ml of inactivated TGEV Miller-strain, which reflected the in vitro responses of memory B-cells (MLN, PBL only). In vitro memory responses for ileal lymphoid tissues were not evaluated due to difficulties with maintenance of sterility in these cultures. The ELISPOT counts of ASC restimulated in vitro by TGEV antigens indicated the presence of TGEV-specific IgA or IgG memory B-cells in the MLNs and PBLs of the vaccinated/infected pigs.
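Both ELISAs above use the same "mean of known negatives plus three standard deviations" cut-off rule; a minimal sketch (the negative-panel values would come from the known-negative samples):

```python
import numpy as np

def elisa_cutoff(negative_panel):
    """Cut-off absorbance: mean of known-negative samples + 3 SD."""
    neg = np.asarray(negative_panel, dtype=float)
    return neg.mean() + 3.0 * neg.std(ddof=1)

# A sample is scored positive when its absorbance exceeds the cut-off:
# positive = sample_absorbance > elisa_cutoff(negative_panel)
```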
Flow cytometry (FCM)

Lymphoid cells isolated at PCD 0 from the MLN of group 3 pigs (n = 4) were used for FCM analysis in order to measure T-cell phenotype ratios before (in vivo) and after in vitro stimulation of these cells with inactivated TGEV. Differences in the distribution of T-cell phenotypes in vivo compared to in vitro were analyzed in a single-colour (CD25 and SWC6) or tri-colour (CD2CD4CD8) setup by the use of an indirect staining method. The MABs used as primary reagents to stain the porcine lymphoid cell surface molecules were CD2, CD4, CD8 and interleukin-2 receptor (IL-2-R/CD25) (VMRD, Pullman, WA: MSA4, 74-12-4, PT36B and PGBL25A, respectively) and porcine null cells (SWC6) (Serotec, Raleigh, NC: MCA1448). The secondary reagents were anti-mouse-IgG2a-fluorescein isothiocyanate (FITC), anti-mouse-IgG2b-Tri-Color (TC), anti-mouse-IgG1-R-phycoerythrin (PE) (Caltag, San Francisco, CA) and anti-rat-IgG-FITC (Caltag, San Francisco, CA). Isotype controls were mouse IgG2a, mouse IgG2b, mouse IgG1 and rat IgG (Caltag, San Francisco, CA). The MNCs were resuspended, at a concentration of 1 × 10⁷ MNC per ml, in buffer (PAB) composed of PBS, pH 7.3, containing 2 g/l of sodium azide and 2 g/l of bovine albumin. Indirect staining for FCM was performed in U-bottom 96-well plates (Dynex, Chantilly, VA), where 100 µl of MAB diluted in PAB was applied, followed by 1 × 10⁶ MNC (100 µl). The MNC were incubated with MABs at 37 °C for 30–60 min. After incubation with the primary reagent, the MNC were centrifuged in 96-well plates at 435 × g for 5 min, supernatants were aspirated, and cell pellets were resuspended in 100 µl of PAB buffer and washed two more times to completely remove the unbound primary reagent. After three washes, the secondary, labeled reagents (100 µl per well) were added to 100 µl of MNC suspension and incubated at 37 °C for 30–60 min. After staining the MNC with secondary reagents, three washes were performed as described above, and the MNC were transferred to labeled 12 mm × 75 mm glass tubes (Fisher, Itasca, IL) containing 100 µl of 2% paraformaldehyde in PBS, pH 7.3, followed immediately by thorough mixing. Each tube was capped with parafilm and refrigerated in the dark, on a shaker (40 rpm), for up to 1 month, then tested by FCM. Each set of samples contained unstained MNC controls, secondary-antibody-only-stained controls and isotype controls in order to assess the cut-off between the stained and unstained cell populations. Each FCM measurement included approximately 100,000 events (Coulter Epics Elite; Coulter Corporation, Miami, FL).

Statistical analysis

The Mann–Whitney nonparametric test was used to evaluate the differences (p < 0.05) between each group in TGEV VN-titer and TGEV IgG- and IgA-ELISA-titer at PCD 0, in the ELISPOT TGEV IgG- and IgA-ASC numbers at PCD 0 and 6–7, and in the in vivo versus in vitro T-cell numbers as detected in group 3 MLN tissues at PCD 0.
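A sketch of the group comparison just described, using SciPy's two-sided Mann–Whitney U test; the titer values shown are placeholders, not data from this study:

```python
from scipy.stats import mannwhitneyu

group3_vn = [512, 256, 1024, 512]   # placeholder VN titers, group 3
group1_vn = [4, 4, 8, 4]            # placeholder VN titers, group 1
u, p = mannwhitneyu(group3_vn, group1_vn, alternative="two-sided")
print(p < 0.05)                     # significance at the level used here
```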
Nature of IP-administered TGEV-proteins

The TGEV-specificity of the baculovirus-expressed proteins was confirmed by Western blot prior to their IP-inoculation into pigs (Fig. 1). The S-, N- and M-protein expression levels in baculovirus (2, 7 and 6%, respectively) were in accord with published data on the efficiency of the baculovirus expression system (Luckow and Summers, 1988).

Serum antibody responses

Serum antibody responses were detected by monoclonal antibody-capture ELISA and VN tests (Table 1). On the day of the second IP-inoculation (PCD −10), no detectable TGEV IgG-antibodies (<20) or VN-antibodies (<4) were found in any of the groups (Table 1). Only TGEV IgG-antibodies, and no TGEV IgA-antibodies, were detected in the serum of the pigs (except group 4) by PCD 0. At PCD 0, only the group inoculated with TGEV proteins (group 3) and group 4, infected at 11 days of age with virulent TGEV, showed both IgG- and VN-antibody to TGEV (Table 1). No ELISA- and very low or no VN-antibody titers were detected in group 2, previously inoculated with the TGEV commercial vaccine, and in the negative control group 1. The difference in geometric mean titers (GMTs) between groups 1 and 2 compared to groups 3 and 4 at PCD 0 was significant (p < 0.05), as detected by both ELISA and VN tests (Table 1). At PCD 6–7, all groups seroconverted to TGEV, with the highest ELISA IgG- and VN-antibodies in group 3. At PCD 0, group 3 had the highest ELISA IgG- and VN-antibodies among the four groups, although not significantly higher than group 4 (p > 0.05), reflecting the response elicited by the vaccine containing the baculovirus-expressed S-protein. No significant serum antibody differences were observed among the four subgroups of control group 1; therefore, the serum antibody and other results are presented as overall group 1 means.

Fig. 1. Three baculovirus-expressed TGEV major structural proteins (spike, S; nucleocapsid, N; membrane, M) were tested by Western blot with TGEV Miller-strain hyperimmune serum to confirm their TGEV-specificity prior to IP-administration to group 3 pigs. A: 1, BioRad low range protein standard; 2, protein lysate containing the wild type baculovirus ACNPV; 3, protein lysate containing the purified TGEV Miller-strain (* indicates the three major structural proteins, in order S = 220 kDa, N = 47 kDa, M = 28–31 kDa); 4, protein lysate containing the baculovirus-expressed TGEV S-protein (clone R2-2, truncated to 150 kDa); 5, protein lysate containing the baculovirus-expressed TGEV M-protein; 6, BioRad high range protein standard. B: 1, BioRad low range protein standard; 2, protein lysate containing the baculovirus-expressed TGEV N-protein; 3, BioRad high range protein standard.

Table 1. Geometric mean antibody titers (GMT) against TGEV and 95% confidence limits (upper limit): (189) 1138 (6420) 930 (7803). a Statistical evaluation of differences among the 4 groups (Mann–Whitney nonparametric test, p < 0.05) was applied at PCD 0. b Designated group GMT higher than group 1. c Designated group GMT higher than groups 1 and 2.
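Geometric mean titers such as those in Table 1 are conventionally computed on log-transformed titers; a sketch of one such computation (the paper does not state its exact confidence-interval formula, so the t-interval below is an assumption):

```python
import numpy as np
from scipy import stats

def gmt_ci(titers, conf=0.95):
    """Geometric mean titer with a t-based CI on log2-transformed titers."""
    x = np.log2(np.asarray(titers, dtype=float))
    lo, hi = stats.t.interval(conf, len(x) - 1, loc=x.mean(), scale=stats.sem(x))
    return 2 ** x.mean(), (2 ** lo, 2 ** hi)
```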
TGEV antibody secreting cells (ASC)

By measuring the TGEV IgA- and IgG-ASCs from lymphoid tissues (Fig. 2) which are part of the mucosal immune system (I, MLN) compared to the systemic immune system (PBL), more consistent results were obtained with ASCs restimulated in vitro with inactivated TGEV (memory B-cells) than with ASCs representing the in vivo antibody responses. In all groups tested at PCD 0 and 6–7 for I, MLN and PBL tissues, the TGEV IgG-ASCs were generally detected in equal or higher numbers (Fig. 2A–E) than IgA-ASCs. At PCD 0, few significant differences were detected for in vivo ASC numbers, except for group 4 ileum, in which low numbers of IgA and IgG ASC were detected, in contrast to no ASC in the ileum of groups 1–3 (Fig. 2A). For group 4 MLN (PCD 0 and 6–7) and groups 3 and 4 PBL (PCD 0), both IgA- and IgG-ASC numbers were above the ASC numbers of groups 1 and 2 (p < 0.05, Fig. 2B and C). At PCD 6–7, in vivo IgG-ASC numbers in all three tissues (I, MLN, PBL) of groups 2–4 were greater than the group 1 IgG-ASC numbers. The IgG-ASC numbers in the MLN of groups 3 and 4 were greater than those for groups 1 and 2 (p < 0.05, Fig. 2A, B and C). After measuring the ASCs from in vitro stimulated (inactivated TGEV) MLN and PBL tissues, we found that at PCD 0 the IgG-ASCs were higher in groups 3 and 4 (p < 0.05) than in groups 1 and 2 (Fig. 2D and E), and the IgA-ASCs were higher in MLN tissues of groups 3 and 4 (p < 0.05) than in groups 1 and 2 (Fig. 2D). After TGEV challenge (PCD 6–7), the in vitro stimulated MNC from the MLN and PBL tissues showed no significant differences (IgG-ASC) among groups 2–4; however, the IgA-ASCs in MLN tissues of group 3 were significantly higher than the group 1 and 2 IgA-ASC numbers (p < 0.05), but not significantly different compared to group 4 (Fig. 2D). For the PBL tissues stimulated in vitro at PCD 6–7, there were no significant differences among groups 2–4 (all were significantly elevated above group 1) for IgA- and IgG-ASC numbers (Fig. 2E).

Fig. 2. TGEV-specific IgG and IgA antibody secreting cells (ASC) as detected at PCD 0 and 6–7 from 2 to 8 pigs per time point of groups 1–4. A, B, C: in vivo ASC numbers for ileum lamina propria, mesenteric lymph node and peripheral blood mononuclear cells (MNC). D, E: in vitro memory ASC numbers (after 4–5 days of in vitro MNC stimulation with TGEV inactivated antigens) for mesenteric lymph node and peripheral blood lymphoid tissues. Differences in group ASC numbers were measured at PCD 0 and 6–7: * above the bars indicates significantly elevated (p < 0.05) numbers of ASC above the group 1 (negative controls) ASC numbers; ** indicates numbers above the group 1 and 2 (pigs inoculated with commercial TGEV-vaccine) ASC numbers.

Flow cytometry analysis

From four pigs in group 3 (previously inoculated with the S, N and M TGEV structural proteins together with the LT-R192G toxin in IFA), MNC isolated from the MLN tissues were tested by flow cytometry for the in vivo and in vitro (stimulation with inactivated TGEV) immune responses. By measuring the percentages of selected T-cell phenotypes (CD2+CD4+CD8−, CD2+CD4+CD8+, CD25+, CD2+CD4−CD8+bright, CD2+CD4−CD8+dim and CD2−CD4−CD8−, corresponding to T-helper, double positive, antigen-activated, cytotoxic T, NK and null cells, respectively), several differences in the distributions of these phenotypes were observed after comparing in vivo and in vitro responses (Fig. 3A and B). After 4–5 days of stimulation of MNC with inactivated TGEV, a significant (p < 0.05) decrease in the CD2−CD4−CD8− phenotype and an increase in CD25+ (IL-2-R) expression occurred (Fig. 3). At the same time (PCD 0), increased numbers of CD2+CD4+CD8+, CD2+CD4−CD8+bright and CD2+CD4−CD8+dim cell phenotypes were observed; however, these increases were not significant (Fig. 3). The MLN tissues collected from group 3 pigs (in vivo response) at PCD 0 were additionally tested for expression of the exclusive porcine null cell marker (SW6), and 33 ± 7% of MNC were positive (data not shown), compared to 44 ± 9% of MNC positive for the CD2−CD4−CD8− phenotype.
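The phenotype labels used above map one-to-one onto the tri-colour staining patterns; a small sketch of that mapping (gating thresholds for "dim" versus "bright" CD8 are instrument-specific and not shown):

```python
def phenotype(cd2, cd4, cd8):
    """Map a tri-colour staining pattern to the subset names used in the
    text; cd2 and cd4 are booleans, cd8 is one of 'neg', 'dim', 'bright'."""
    if cd2 and cd4 and cd8 != "neg":
        return "double positive (CD2+CD4+CD8+)"
    if cd2 and cd4:
        return "T-helper (CD2+CD4+CD8-)"
    if cd2 and not cd4 and cd8 == "bright":
        return "cytotoxic T (CD2+CD4-CD8+bright)"
    if cd2 and not cd4 and cd8 == "dim":
        return "NK (CD2+CD4-CD8+dim)"
    if not cd2 and not cd4 and cd8 == "neg":
        return "null (CD2-CD4-CD8-)"
    return "other"
```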
TGE symptoms and TGEV shedding

Progression of clinical TGE-symptoms such as diarrhea, dehydration, anorexia and lethargy, and shedding of TGEV antigen in the feces of challenged pigs, was measured from PCD 0 to PCD 9 in all groups (Fig. 4A and B), except for group 4, where, due to the 33% mortality associated with the first TGEV inoculation, lower numbers of pigs were available and TGE-symptoms were measured only up to PCD 5. The most severe TGE-symptoms were observed in the TGEV-seronegative control group 1, which did not possess any immunity to TGEV (Fig. 4A). Although group 4 (positive controls previously inoculated with virulent TGEV) showed a high degree of active immunity to TGEV at PCD 0, associated with a lack of TGEV shedding, this group continued to be stunted and still exhibited mild dehydration (a result of the initial TGEV infection at 11 days of age) compared to age-matched pigs (groups 1–3). This impacted the group 4 TGE clinical scores measured after challenge at 27 days of age (Fig. 4A). Groups 2 and 3 (inoculated with the commercial or recombinant TGE-vaccines) showed similar patterns of clinical signs, which were completely ameliorated between PCD 6 and 9 (Fig. 4A). TGEV shedding in feces better reflected the degree of active local immunity to TGEV (as detected by ELISPOT in I- and MLN-lymphoid tissues). At PCD 1, no shedding of TGEV-antigen was detected in group 4 pigs, followed by 38% shedding in group 3 pigs, 86% shedding in group 2 pigs and 100% shedding in the negative control group 1 pigs (Fig. 4B).

Fig. 3. Percentages (%) of porcine T-cell phenotypes identified in mesenteric lymph node tissues of group 3 pigs at PCD 0. A: in vivo immune responses, corresponding to directly isolated, stained MNC tested by flow cytometry. B: in vitro memory immune responses, corresponding to MNC stimulated 4–5 days in vitro with inactivated TGEV and then stained and tested by flow cytometry. * indicates significantly elevated (p < 0.05) expression of IL-2-R for the in vitro memory response versus the in vivo response; ** indicates significantly decreased (p < 0.05) numbers of the CD2−CD4−CD8− phenotype for the in vitro memory response versus the in vivo response.

Fig. 4. Progression of TGE clinical symptoms (A) and TGEV shedding in rectal swabs (B) as tested for the 4 groups of pigs (group 1 = negative controls; group 2 = pigs IP-immunized with TG-Emune commercial vaccine; group 3 = pigs IP-immunized with baculovirus-expressed TGEV S, M and N proteins in IFA containing the LT-toxin of E. coli; group 4 = pigs exposed at 11 days of age to virulent TGEV), after challenge exposure with virulent (1 × 10⁶ PFU/pig) TGEV at 27 days of age (PCD 0). TGE symptoms are expressed as group mean + standard error of the mean; TGEV shedding as % of pigs shedding the virus in rectal swabs/total number of inoculated pigs within a group.

Discussion

During this decade, emphasis has been on the construction of TGEV protein subunit vaccines. Several systems were used to express the TGEV S, M and N proteins, such as E. coli, Salmonella, adenovirus, vaccinia virus and baculovirus (Britton et al., 1987; Pulford and Britton, 1991b; Godet et al., 1991; Enjuanes et al., 1992; Tuboly et al., 1995; Torres et al., 1995, 1996; Smerdou et al., 1996; Shoup et al., 1997). Although some protective antibodies were induced in inoculated animals, the protection was only partial (Torres et al., 1995), not reported (Smerdou et al., 1996), or the VN-antibodies were mostly IgG (Saif and Wesley, 1992; Shoup et al., 1997; Park et al., 1998).
Moreover, TGEV-immunogens expressed in prokaryotes were not glycosylated or soluble, or did not induce VN-antibodies (Saif and Wesley, 1992). Human adenovirus vectors were reported to undergo abortive replication in the porcine gut and to lose the TGEV (S) inserts (Torres et al., 1996). The S protein expressed in the baculovirus expression system induced VN antibodies to TGEV in the serum of rats and pigs after parenteral application (Tuboly et al., 1995; Shoup et al., 1997). However, these serum VN-antibodies were not protective (Godet et al., 1991; Tuboly et al., 1995; Shoup et al., 1997). Only IgG antibodies to TGEV were detected in sows' colostral and milk whey after a baculovirus-expressed S protein was administered with IFA intramammarily (IMM) and intramuscularly (IM) to TGEV-seronegative pregnant sows (Shoup et al., 1997). Moreover, there was no significant impact on litter morbidity or mortality after TGEV challenge exposure of these litters (Shoup et al., 1997). The use of additional adjuvants, immunomodulators and routes of administration remains to be explored to try to increase the immunogenicity of TGEV protein subunits.

The adjuvant activity of bacterial toxins such as Vibrio cholerae toxin (CT) and the heat-labile enterotoxin (LT) from enterotoxigenic E. coli has been reported using the mouse model (McGhee et al., 1992; Katz et al., 1997). Both CT and LT enhanced mucosal IgA responses, T-cell proliferation, IL-2 production and extended immunological memory (McGhee et al., 1992; Katz et al., 1997). The ability to target IgA inductive sites makes LT and CT ideal mucosal adjuvants (if the toxic effects can be suppressed) after coupling or coadministration with other antigens, even poor immunogens (Wu and Russell, 1994). Several non-toxic LT mutants and recombinants have been prepared with partial knockout of their ADP-ribosyltransferase activity (which is associated with enhanced cyclic AMP formation in the affected intestinal cells) (Giuliani et al., 1998). As a non-toxic antigen carrier system, the LT-B subunit has been coexpressed as a fusion protein with TGEV-S in a Salmonella expression system, and some TGEV-neutralizing antibodies were induced in a mouse model after oral administration (Enjuanes and Van der Zeijst, 1995). However, further exploration of the mucosal adjuvanticity of enzymatically inactive forms of LT (mutants or recombinants) needs to be addressed in the context of their potential to stimulate IgA immune responses and protective immunity.

The IP route of immunization with CT conjugated to bacterial antigens has attracted attention recently in mouse models, because peritoneal B-cells are able to repopulate other compartments of the common mucosal immune system, including the intestinal lamina propria, with IgA-ASC (Wu and Russell, 1994). Moreover, the feasibility of accessing the GALT via the serosal surface after IP administration of antigens was suggested in general for mucosal vaccine formulations (Husband, 1993). The IP immunization of preweaning and postweaning pigs with a killed E. coli vaccine reduced the mortality rate among vaccinated pigs to half that of unvaccinated controls (Husband and Seaman, 1979). Thus, our hypothesis was that IP inoculation of pigs with recombinant TGEV structural proteins combined with recombinant LT may be used to deliver these vaccines to the GALT and elicit intestinal IgA antibodies and protection against TGEV.
Systemic immune responses (represented by serum IgG-antibodies to TGEV) in group 3 pigs reflected the immunogenic properties of the TGEV S-protein administered IP. IgA-antibodies to TGEV were not detected in the serum of any group at PCD 0, except for low titers in group 4 pigs, reflecting the use of a non-replicating recombinant protein vaccine or killed TGEV administered IP. Coadministration of TGEV-proteins with LT-R192G resulted in local production of IgA-ASC, as detected by ELISPOT at PCD 0 in the MLN (but not ileum) of group 3 pigs. In our previous studies, administration of baculovirus-expressed S with or without M recombinant proteins (IM, IMM, IP) without LT-R192G resulted in only systemic (IgG) and very low local (IgA) immune responses to TGEV, or IgA was not measured (Shoup et al., 1997; Sestak et al., 1997; Park et al., 1998).

In this study at PCD 0, the increase observed in both TGEV IgG- and IgA-ASCs after in vitro stimulation with inactivated TGEV-antigens (memory response) suggested that memory ASC for TGEV-specific IgA were present in groups 2–4 (MLN-tissues) at the time of challenge (PCD 0), but that there was no or very little transudation of IgA into the serum, as measured by ELISA (<20), after vaccination. These findings are consistent with those of others who reported serum IgA-antibodies to TGEV only during the convalescent (>PCD 7) but not during the acute (<PCD 7) stage of TGE (Kodoma et al., 1981). Other factors which most likely contributed to the lack of detectable serum IgA to TGEV in group 3 pigs prior to PCD 6–7 were the use of a non-replicating protein vaccine and the possibility that, even when IgA ASC were induced locally in GALT, as demonstrated by ELISPOT, the numbers of IgA ASC were low and the efficiency of transudation of IgA into the serum was low.

In previous studies, it was found that in suckling pigs passive immunity against TGEV-challenge was mostly associated with IgA (Saif and Wesley, 1992). The partial protection in our group 3 pigs (38% shed TGEV in feces at PCD 1, compared to 100% in group 1, and clinical TGE scores did not rise above a value of 1) most likely reflected the fact that a substantial number of TGEV IgA-precursor ASC were formed in this group after inoculation with TGEV recombinant proteins and LT-R192G, as detected by the in vitro ELISPOT at PCD 0 and 6–7. Only the group 4 pigs (inoculated with virulent TGEV prior to TGEV challenge) showed higher numbers of TGEV IgA-ASCs at PCD 0 (Fig. 2) in ileum and MLN tissues. In contrast, even after in vitro stimulation with TGEV-antigens, no or very low numbers of TGEV-specific ASCs were detected at PCD 0 in group 2 (inoculated with the commercial vaccine), reflecting the poor efficacy of the commercial TGEV vaccine used in our study in inducing IgA antibody responses. The highest numbers of TGEV IgA-ASCs among the 4 groups of pigs were found in group 4 pigs, which did not shed TGEV at PCD 0–5. In this group, however, the previous TGEV-inoculation (at 11 days of age) resulted in marked stunting of the pigs (compared to age-matched non-infected pigs), mild dehydration and a 33% mortality rate by PCD 0, when all groups were challenged with TGEV. It is important to mention, however, that evaluation of TGE-symptoms after infection of 27-day-old pigs in this study was not as informative an indicator as it can be in future studies, when recombinant TGEV proteins might be administered orally in the form of capsules to pregnant sows and TGE-symptoms then evaluated in neonatal pigs.
The induction of cell-mediated immunity after TGEV infection has been investigated by macrophage migration inhibition (Frederick and Bohl, 1976), leukocyte migration inhibition (Woods, 1977), direct lymphocyte cytotoxicity (Shimizu and Shimizu, 1979), spontaneous cell-mediated cytotoxicity and antibody-dependent cell-mediated cytotoxicity (Cepica and Derbyshire, 1983), and antigen-specific lymphocyte proliferation (Brim et al., 1995). Among all mammalian species studied, swine show the most diverse T-lymphocyte populations. The porcine T-cell populations are unique in that they contain several major subsets which do not exist, or play only a minor role, in other species. About 10–50% of porcine T-cells show the CD2−SW1+ phenotype, and this subpopulation has been referred to as porcine null cells (Binns, 1994). Although the CD4 and CD8 antigens show a similar T-cell expression pattern compared to other mammals, i.e. T-helper cells and cytotoxic T-cells, respectively (Saalmuller and Bryant, 1994), two unusual (major) subpopulations of porcine T-cells have been identified based on coexpression of these two antigens: CD4+CD8+ and CD4−CD8−, i.e. double positive and double negative (or null) cells (Pescovitz et al., 1994; Saalmuller and Bryant, 1994). The exact function of these two enigmatic porcine T-cell subsets remains to be elucidated (Saalmuller et al., 1996), although the results from some studies suggest that porcine CD4+CD8+ T-cells exhibit properties of antigen-experienced cells or memory T-cells (Zuckermann and Husmann, 1996). It was reported that the TGEV N-phosphoprotein contains at least 3 T-helper cell epitopes, suggesting the involvement of T-helper cells in TGEV antibody synthesis (Anton et al., 1995).

In our study, the impact of the coadministration of TGEV structural proteins with LT-R192G in IFA on the distribution of major T-cell subsets and the expression of IL-2-R was measured at PCD 0 for MNC of MLN from the in vivo and in vitro (inactivated TGEV) stimulated cultures. The IL-2-R is expressed on the surface of antigen- or mitogen-activated, but not resting, cells (Smith, 1988; Minami et al., 1993). In our study, we found significantly increased IL-2-R expression on MNC from the MLN of group 3 pigs after their in vitro stimulation with inactivated TGEV antigens, as measured at PCD 0. The significant increase of IL-2-R expression was accompanied by a significant decrease in the CD2−CD4−CD8− phenotype, suggesting the possibility of null cell transition into other, perhaps more functional, cell types such as double-positive T-cells, NK-cells or cytotoxic T-cells. For group 3, the occurrence of double-positive, NK and cytotoxic T-cell phenotypes in the in vitro MLN cultures of MNC was increased above the level corresponding to the in vivo responses, but not significantly. Our data (increased occurrence of CD4+CD8+ and IL-2-R+ cells in vitro) are in agreement with investigators who suggested the involvement of T-helper cell epitopes in TGEV-antibody synthesis and that double-positive CD4+CD8+ T-cells show properties of memory T-cells (Anton et al., 1995; Zuckermann and Husmann, 1996). Positive identification of porcine CD2− null cells can be performed by detection of SWC6 antigen expression (Saalmuller and Bryant, 1994). In our group 3 pigs, 33 ± 7% of MNC from MLN were identified as SWC6+.
Considering the percentages and the standard errors (CD2−CD4−CD8− phenotype: 44 ± 9% and SWC6+ phenotype: 33 ± 7%), it is possible that the group 3 CD2−CD4−CD8− MNC population consisted predominantly of SWC6+ cells, with the rest being other cell types not expressing the CD2, CD4 and CD8 antigens, such as B-cells. In the case of porcine CD8+ cells, two different levels of expression (high and low, or bright and dim) had been previously described as corresponding to cytotoxic T (CD8+bright) and NK (CD8+dim) cells (Pescovitz et al., 1988). In our study both cytotoxic T and NK cell phenotypes were identified based on bright and dim expression of the CD8 antigen. After in vitro stimulation of group 3 MNC from the MLN with inactivated TGEV antigens (PCD 0), an increased number of cytotoxic T and NK cell phenotypes suggested an involvement of these cells in immune responses after IP inoculation of pigs with baculovirus-expressed TGEV structural proteins. This finding is in agreement with the findings of others who observed NK- and cytotoxic T-cell activity after TGEV infection of neonatal pigs (Cepica and Derbyshire, 1984). In our study baculovirus-expressed TGEV structural proteins (S, N and M) coadministered IP with the E. coli mutant LT-toxin (LT-R192G) induced MLN immune responses associated with IgA antibodies to TGEV, coinciding with reduced TGEV shedding in the feces of challenged pigs. Moreover, our results suggest that TGEV subunit vaccines based on baculovirus recombinants administered IP with LT-R192G can also actively stimulate systemic immune responses. The in vitro immune response in group 3 pigs at PCD 0, compared to the in vivo responses, was associated with a significant upregulation of IL-2-R, a decreased occurrence of the CD2−CD4−CD8− (null cell) phenotype (p < 0.05), and an increased occurrence of the CD4+CD8+ (double-positive T-cells), CD2+CD4−CD8+dim (NK-cells) and CD2+CD4−CD8+bright (cytotoxic T-cells) phenotypes (p > 0.05). Further improvements in the efficiency of this type of vaccine could be explored by the use of additional carrier systems such as Immuno-Stimulating Complexes (ISCOMs), biodegradable microspheres, etc.
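To make the overlap argument at the start of the preceding paragraph concrete, the reported phenotype frequencies (44 ± 9% CD2−CD4−CD8− versus 33 ± 7% SWC6+) can be compared with a simple two-sample z statistic, treating each value as mean ± standard error. This is only an illustrative back-of-the-envelope check, not an analysis performed in the study.

```python
import math
from scipy.stats import norm

# Reported phenotype frequencies as (mean %, standard error %)
null_cells = (44.0, 9.0)   # CD2-CD4-CD8- phenotype
swc6_cells = (33.0, 7.0)   # SWC6+ phenotype

diff = null_cells[0] - swc6_cells[0]
se_diff = math.sqrt(null_cells[1] ** 2 + swc6_cells[1] ** 2)
z = diff / se_diff
p = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"difference = {diff:.1f}% +/- {se_diff:.1f}%, z = {z:.2f}, p = {p:.2f}")
# z is well below 2 (p >> 0.05), consistent with the interpretation that the
# SWC6+ cells could account for most of the null-cell population.
```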
Defining Life in African Igbo Cosmology A people's cosmology defines their perception of the universe and their place in it. It explains their thought systems, values and attitudes, as well as their hierarchy of forces and its relationships. Thus African Igbo cosmology explains the people's perception of the universe and their place in it; their values, laws and, very importantly, their understanding of the purpose of existence. Igbo cosmology recognizes three ontological levels of existence in the universe, where the inhabitants of these levels interact in some unique ways to give meaning to human existence. This cosmology also recognizes some elements of human existence, which include life, offspring, truth, justice, wealth, love and peace, as great values. But among all these values, 'life' stands tall as the greatest and most cherished value among the people. Undoubtedly, life is cherished in all human societies, but the value attached to life is not the same in all cultures. This paper therefore seeks to give a broad definition of the meaning and essence of life as a value among the Igbo and also to show how the people's belief system, attitudes, etc. are informed by their perception of life. Apart from seeking to clarify some misconceptions about the meaning and essence of life in Igbo cosmology, this work is also a premeditated effort geared towards encouraging good and moral life among the Igbo, as a bad and immoral life is seen as worthless by the people. Introduction: What constitutes the true meaning and essence of life has been one of the major concerns of philosophers from the ancient down to the contemporary period of philosophy. For instance, Socrates would say 'not life, but good life, is to be chiefly valued'. And Aristotle in his philosophy of mind would say that 'a soul is the actuality of a body that has life', where life means the capacity for self-sustenance, growth and reproduction. However, in spite of the enormous interest shown over the years in studying and understanding the subject of human life, it can be said that a definitive and unequivocal agreement on the real meaning and essence of life has still not been reached, hence the different conceptualizations of life among different peoples of the world. Thus studies in philosophical and social anthropology reveal that scholars are divided in their conception of what actually constitutes the meaning and essence of life. The group known as 'vitalists' (Mondin, 1999) considers life a singular, original phenomenon, irreducible to matter. Vitalists are of the view that living organisms embody the phenomena of self-construction, self-conservation, self-regulation and self-repair, which are absent in machines and non-living beings. Yet another group, known as 'mechanists', asserts that all life phenomena can be completely explained in terms of the physical-chemical laws that govern the inanimate world. In other words, life is subject to the physical and chemical laws that operate in the world, and these laws confer meaning on life. This view is held by some philosophers led by Rene Descartes. However, what actually constitutes the essence of life in African Igbo cosmology is the major preoccupation of this paper. This the paper discovers through carefully addressing the various beliefs, thought systems and practices of the Igbo that directly and indirectly point to their conception of the value and essence of life, which is at the very centre of their cosmology.
Understanding African Igbo Cosmology To fully understand the meaning of Igbo cosmology, it is pertinent that we first explore the concept of cosmology. The word 'cosmology' originated from the Greek words 'cosmos' and 'logos', which mean 'universe' and 'science' respectively. Put together, the word 'cosmology' means 'science of the universe'. Studies in social anthropology present cosmology as meaning the broad ideas and explanations which people have about the world in which they live and their place in that world. These include notions of other worlds, worlds from which they may believe they have come and to which they go after death in this world, or indeed during transcendental experiences in this world (Hendry, 1999:115). Expatiating further, Madu (2004:84) explains that: … cosmology focuses its search-light on the question of the meaning of the world, its origin in space as well as the question of the intricate web of relationships within the cosmologic ontological hierarchies. Such questions implicate the quest for transcendental reality, a quest which is inherently a religious one. Therefore, Igbo cosmology means essentially the ways the Igbo race perceives, views and understands their universe; the lens through which they see, evaluate and appraise reality, which helps them form their values and attitudes. Kanu (2015:82) avers that 'it is the … search for the meaning of life, and an unconscious but natural tendency to arrive at a unifying base that constitutes a frame of meaning'. For Uchendu (1965:11), 'Igbo cosmological ideas express the basic notions underlying cultural activities and define cultural goals and social relations'. Going further, he points out that: Igbo cosmology is an explanatory device which theorizes about the origin and character of the universe. It is a guide to conduct or a system of prescriptive ethics which defines what the Igbo ought to avoid, and finally, it is an action system which reveals what the Igbo actually do as manifested in their overt and covert behavior. Igbo cosmology defines the underlying thought system that unifies the Igbo value system, their perception of life, social conduct, moral values, laws and ideas. There are divisions of Igbo cosmology. On this Agbeke Aja (2015:154) writes: ...the Igbo, in an attempt to give account of their cosmology, end up by pointing to the sky, the earth and the world underneath. The sky, to the Igbo, is the abode of God, Chukwu, and the Spirits; the departed live in the world underneath, ala muo, while living humans dwell on earth. The earth, the human world, is also the abode of the lesser spirits. That is to say, the Igbo conceive three dimensions of existence, peopled, as it were, with different categories of forces or beings, some visible, some invisible. The above shows divisions that represent the structure of the African Igbo universe. Corroborating the above view, Ijiomah (2005) in Kanu (2015:84) avers that: The African universe consists of three levels: the sky, the earth and the underworld. The sky is where God, Chukwu or Chineke, and the angels reside; the earth is where man, animals, natural resources, some devils and some physically observable realities abide; and the underworld is where the ancestors and some bad spirits live. The African Igbo cosmos therefore harbors the spiritual and physical realms, which despite their separate existence interact at some levels to give order and unity to African Igbo existence and cosmology.
This unity is underlined in the fact that the traditional Igbo have a unified picture of the universe, with ideas about its origin, its structure and the nature of the various forces that occupy it. Although these ideas are largely speculative, their original impetus is nevertheless empirical. An author had rightly opined that: The Igbo world-view implies two basic beliefs: (1) the unity of all things and (2) an ordered relationship among all beings in the universe. Consequently, there is a belief in the existence of order and interaction among all beings. Any disorder is the result of improper conduct on the part of any of the beings. If the cause of this is known, then it can be corrected and rectified. Human survival and existence depend on a proper maintenance of this order. To safeguard and ensure this cosmic and social order, a number of prohibitions, taboos (nso-ala) and sanctions are devised and enforced through various means. The major reason for the prohibitions, taboos and sanctions is nothing else but to maintain and ensure the continuous equilibrium of the nexus of relationships. 'Life' in African Igbo Cosmology A careful look at the Igbo world would show that in everything, life stands out for the Igbo as the greatest value, around which other values like wealth, health, etc. find their meanings. It is the greatest and the most prized value, which is at the centre of the Igbo man's experience of the universe and which confers meaning and authenticity on his general perceptions of reality. The primacy of life in the Igbo world is evidently clear in expressions like 'Ndubuisi' (life is first) and 'Ndukaku' (life is greater than wealth). These are not just mere appellations or ordinary expressions; they are pregnant with meanings. According to Ehusani (1996:31): the Igbo world is principally anthropocentric, such that human life is immensely valued, zealously and religiously guarded; hence anything that threatens life is ruthlessly dealt with. For instance, one who is a witch could either be killed or expelled from the community, never to return, because witchcraft is a permanent threat to life. Life is at the centre of Igbo cosmology and ontology. Virtually all activities of the Igbo man are geared towards achieving one objective, which consists in maintaining and augmenting life. Anything that negates or detracts from achieving this all-important objective is most times considered abominable and therefore condemned and detested. For instance, Amaegwu (2013:75) succinctly notes that 'suicide is considered most abominable crime against humanity and any person guilty of suicide is denied formal burial'. The fate of one who commits murder is no better. His is sometimes more serious, because spilling human blood is the worst crime among the Igbo. The practice of euthanasia or homicide is considered abominable and highly detested among the Igbo. As far as the Igbo are concerned, euthanasia is an alien concept which is never justified by the people. Therefore, because of its importance and position in the scheme of things, life is maintained in order to be lived fully and meaningfully. To do this, the Igbo elder sometimes starts his day with prayer (Igo Ofo). This he does with the intermediary Ofo, the symbol of justice; the prayer centres on and brings together ancestors, spirits and humans for the special purpose of maintaining and preserving life, which is the ultimate value. Again, life is conceptualized among the Igbo as meaningful only when lived among others in the human community.
This is because, for the Igbo, life is a shared experience. We are because others are, and our being finds meaning and expression in the being of others. This is the spirit of Igbo African communalism, which is one of the defining characteristics of the traditional Igbo. The truth of this is underscored by Ogugua (2003:7) thus: As the Igbo live in community, they value a communitarian kind of life; no doubt for them, life is a communion in interaction, or else why their belief that if one does a moral evil, it affects the entire community. Conclusion: Life is so highly valued among the Igbo that at its inception it is welcomed with great joy, great rejoicing and cultural celebrations, and at its end it is greeted with great mourning expressed in different cultural and funerary practices. Ada Mere (1973) in Ogugua (2003:11) captures the situation vividly when she points out that: Traditionally children are highly valued as they have to continue the ancestral line in order to retain the family's ownership of whatever property belongs to it... on the part of the Igbo parents, having children wards off the anxiety of growing old and the fear of loss of property to undeserving fellows… This explains the philosophy behind the Igbo man's constant prayer against childless marriage, which most times is seen as a curse from God and from the deities. Children are a sign of immortality, as one who has children never fully dies, because his offspring continue his earthly existence after his death. However, it must be pointed out that not every life is cherished by the Igbo, as ne'er-do-wells (ndi okaliogoli) are never celebrated. The life that is celebrated and valued by the Igbo is only a worthy and moral one. Murderers are themselves condemned and put to death without recourse to any court of law. Good children are seen as assets and gifts from God (Chukwu), while the recalcitrant ones are often chided for their unworthy and unacceptable lifestyle.
Comparative effectiveness of preventive medical care for orthopedic pathologies in cows The available bibliographic sources on the study of distal limb diseases in cows were analyzed. The causes and factors that contribute to the risk of limb pathology in cows at milk production enterprises are emphasized. Lack of active exercise, inadequate feeding rations, insufficient veterinary hygiene, high contamination of premises and a low or insufficient level of preventive medical care lead to an increase in morbidity affecting up to 40% of the herd. Therefore, the problem of treating and preventing distal limb pathology in cows is currently of great importance. In the complex of preventive and medical care against diseases of the distal limbs in cows, functional trimming of the hoof horn has an important veterinary hygienic value. Beginning therapy at an early stage reduces the risk of developing more severe pathologies of the limbs. The use of "EMS-I type A" for the treatment and prevention of distal limb lesions, depending on the degree and size of tissue damage, provides recovery after 2-3 bandages and containment of the disease in the study groups. Introduction In modern dairy farming, cattle limb diseases cause considerable economic damage to commercial farm units. Mass limb disease, including hoof disease, arises from a complex of causes: year-round zero grazing of animals, high contamination of premises, manure gutters, pens and cow alleys, inadequate, insufficient and unbalanced feeding, and a low level of preventive medical care [1,2,3]. Currently, more and more reports are appearing about problems in dairy cattle breeding associated with the occurrence of distal limb diseases, which, of course, negatively affect the health of animals and their productivity. The incidence of distal limb disease depends on many factors: year-round zero grazing of cows, high contamination of farms, sections, boxes, pens and cow alleys, lack of active exercise, inadequate and insufficiently balanced feeding regimes, and a low level of preventive medical care. The high level of morbidity in cows confirms the complexity of the multifactorial etiology of limb lesions. Many researchers note that this problem in our country has begun to manifest itself quite acutely in recent decades, with the reduction of domestic breeds of animals and the active import of Holstein cattle, which are predisposed to limb diseases. Domestic breeds of cows were created by animal breeders in particular territories with their peculiarities of natural and climatic conditions [4,5]. Due to the spread of limb pathology in cows, all farms suffer considerable economic damage, which consists in a significant decrease in milk yield and milk quality, an increase in premature cow disposal, a decrease in reproductive capacity and an increase in the cost of treating animals [6,7]. Based on the above, the aim is to determine the level of incidence of orthopedic pathology in cattle and to conduct a comparative analysis of preventive medical care for distal limb lesions in cows. Materials and methods The research was carried out on a complex and a dairy farm located in the Oryol region. The object of the study was cows (healthy, and sick with orthopedic pathologies) of the black-and-white Holstein breed. During the clinical examination, the animals' body temperature, pulse rate, and respiration were measured.
Animals were assigned to experimental groups according to the principle of analogues. Two series of experiments were carried out: in the first, the therapeutic effectiveness of the drug was studied, and in the second, its prophylactic effect. An antibiotic spray, Top-Hooves hoof gel, oxytetracycline hydrochloride powder, copper sulfate powder and the preparation "EMS-I type A" were used for the treatment and prevention of distal limb diseases in cows. The preparation "EMS-I type A" is a disinfectant made in the form of a solution for external use, containing free iodine in the form of an iodophor (patent No. 2535016) as the active substance and surfactants as auxiliary components that do not have an irritating effect on the skin. The composition includes crystalline iodine, oxyethylidenediphosphonic acid, dimethyl carbinol, neonol, glycerin and water. In appearance, the liquid is dark brown in color with a faint smell of iodine, and mixes with water in any ratio. It has significant detergent properties and a bactericidal effect against non-spore-forming microflora. Glycerin softens the effect of iodine on the skin. The drug is developed and mass-produced by the limited liability company "Experimental and Technological Company "Etris"". Functional processing and trimming of hooves were carried out with special discs, cutters, forceps and knives. The results of the studies were subjected to statistical processing. At the complex, three groups of dairy cows with 65-70 head in each were examined. 20-30% of the animals in each study group were identified with obvious signs of limb lesions. These cows visually showed different degrees of lameness when moving. On the dairy farms, 623 milk cows were examined; 210 animals (33.7%) were identified with obvious signs of limb lesions. Changes in the anatomical structure of the hoof, pododermatitis, interdigital phlegmon, necrotic dermatitis, laminitis, interdigital dermatitis, sole exfoliation, plantar ulcers, hoof deformity and calluses (interdigital growths) were recorded during the examination of the animals (Figure 1).
Fig. 1. Clinical signs of distal limb pathology.
For the prevention of hoof diseases, dairy cows and rearing heifers are periodically passed through hoof baths filled with a 10% solution of copper sulfate, a 3-5% solution of formalin, or "SKIF-D". In addition, cleaning and individual treatment of hooves are carried out periodically in sick animals, with the use of disinfecting drugs and antibiotics. Despite the measures taken at the dairy complex with yard housing, from 20 to 45% of animals in the different age groups are sick with distal limb pathologies. Clinical examination of the herd showed that significantly more affected hooves occurred on the hind limbs than on the forelimbs. Thus, the right hind hooves are affected in 50% of animals, the left hind in 23.7%, the right forelimbs in 11.3% and the left forelimbs in 15%. On the dairy farms, during the period of functional processing and trimming of hooves, limb morbidity in animals kept in winter in tie-up housing was as follows: right forelimb, 4.8%; left forelimb, 2.7%; right hind limb, 48.5%; left hind limb, 44%. Causes of the development of limb pathology in cows at the dairy complex:
1. Lack of active exercise, which would contribute to the natural attrition of the hoof horn.
2. Inadequate and unbalanced feeding rations.
3. Insufficient level of veterinary hygiene.
4. High contamination rate in the premises where the animals are kept.
5. High level of limb injuries.
6. Low or insufficient level of preventive medical care, namely: insufficient examination of animals, untimely functional and preventive trimming of overgrown hoof horn, and late treatment of developed pathology.
Before the experiments, bacteriological and mycological studies of inter-hoof cleft scrapings from sick animals were carried out. In the samples from the affected limbs, the causative agent of necrobacillosis was not isolated, but pathogenic Staphylococcus aureus (β-hemolytic, plasma-coagulating) was isolated. Mycological studies found fungi of the genera Aspergillus (A. niger, A. flavus), Penicillium and Mucor. Laboratory studies of the microflora growth retardation zone in Petri dishes showed that the isolated microorganisms have a certain sensitivity to antimicrobial drugs and to the "EMS-I type A" agent. In the experiment, the agent "EMS-I type A" was tested by individual treatment of sick animals. Before the treatment, the animals' hooves were thoroughly cleaned and trimmed. To clear the hooves, a set of tools was used: pliers and a hoof knife. The instruments were disinfected before use. A special machine was used to fix the animals. Cleaning of the hooves of cattle was carried out in the following way. First, the hoof was cleaned of mechanical contamination. Then, with the help of a hoof knife, the old, yellowish horn was carefully cut off from the sole and digital torus, and the overgrown hoof wall was nipped off with pliers, leaving it at the level of the white line. A special milling cutter was used to trim the hoof horn. The damaged areas were cleaned of necrotic tissues and treated with a 3% solution of hydrogen peroxide. For treatment, a 50% foamy solution of "EMS-I type A" was applied to the cleared surface of the hooves and a gauze bandage was applied. The bandage was changed after three days. In the presence of a pathological process, the bandage was applied repeatedly. Figure 2 shows the hooves of cows before and after treatment. Currently, in veterinary practice, there is a large number of monocomponent disinfectants used for preventive medical care against pathology of the lower limbs of cattle. As preventive measures, farms use formalin baths with a 5% solution of formaldehyde, copper-vitriol baths with a 10% solution of copper sulfate, Hooves solutions and others. However, these disinfectants do not solve the problem. The most effective ones are combined disinfectants; their main advantages are the absence of immunosuppressive action, low toxicity to cows, and breadth of biocidal action against potentially pathogenic microorganisms [8,9,10]. "EMS-I type A" is an important component of the integrated method of combating this widespread pathology; the effectiveness of hoof disease prevention was studied in a group of cows passed through foot baths filled with a foamy 10% solution of the preparation. The control group of animals was passed through hoof baths with a 10% solution of copper sulfate. Preparation, use and replacement of solutions in hoof baths were carried out according to generally accepted methods (Figure 3). The cows' hooves were completely immersed in the foam solution, and the animal, coming out of the bath, carried a certain amount of foam on the limbs, which remained on the hooves for about 10 minutes, providing a longer treatment. During the period of the experiment, there was a decrease in the number of sick cows in each study group.
When treated with a foam solution of the agent "EMS-I type A" at a concentration of 10%, the reduction in lameness was 7%. In the control group of animals, treated with a 10% solution of copper sulfate, the decrease was also 7%, as in the experimental group. The cows were examined during the study period. It was found that all the drug solutions used showed a preventive effect at the initial stage and with a small area of hoof tissue lesion. The studied drug solutions did not promote wound healing in cases of more severe hoof lesions. Timely clearing and trimming, with the formation of the correct anatomical shape of the hooves, has a significant effect in the prevention of diseases in cows. Conclusions Orthopedic diseases in cows are a widespread pathology. The level of morbidity with this pathology on farms, depending on the use of preventive medical care, is in the range of 20 to 45%, and regardless of veterinary hygiene the hind limbs are affected most. Preventive examination of animals and systematic treatment of hooves are the most essential links in the complex of preventive medical measures aimed at combating this disease. Individual cleaning and therapy of the distal limbs with the use of "EMS-I type A" provided recovery of the animals after 2-3 bandages, depending on the degree and area of tissue damage. Driving cattle through hoof baths filled with a foam solution of the "EMS-I type A" product in order to prevent hoof pathologies has a deterrent effect on the growth of the disease.
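As a practical aside, the bath concentrations mentioned above (a 10% copper sulfate or "EMS-I type A" solution, a 3-5% formalin solution) reduce to simple percent-solution arithmetic. The sketch below computes the amount of concentrate needed for a bath of a given volume, assuming a weight/volume (or volume/volume) percent definition; the 200 L bath volume is a hypothetical example, not a figure from the study.

```python
def concentrate_needed(bath_volume_l: float, target_percent: float) -> float:
    """Amount of concentrate (kg of solid, or L of stock liquid) for a
    percent solution, where percent = units per 100 volume units."""
    return bath_volume_l * target_percent / 100.0

# Hypothetical 200 L foot bath (the paper does not state bath volume)
bath = 200.0
print(f"10% copper sulfate: {concentrate_needed(bath, 10):.1f} kg per {bath:.0f} L")
print(f"10% 'EMS-I type A': {concentrate_needed(bath, 10):.1f} L per {bath:.0f} L")
print(f"4% formalin (mid-range of 3-5%): {concentrate_needed(bath, 4):.1f} L per {bath:.0f} L")
```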
Complete subunit architecture of the proteasome regulatory particle The proteasome is the major ATP-dependent protease in eukaryotic cells, but limited structural information strongly restricts a mechanistic understanding of its activities. The proteasome regulatory particle, consisting of the lid and base subcomplexes, recognizes and processes poly-ubiquitinated substrates. We used electron microscopy and a newly developed heterologous expression system for the lid to delineate the complete subunit architecture of the regulatory particle. Our studies reveal the spatial arrangement of ubiquitin receptors, deubiquitinating enzymes, and the protein unfolding machinery at subnanometer resolution, outlining the substrate's path to degradation. Unexpectedly, the ATPase subunits within the base unfoldase are arranged in a spiral staircase, providing insight into potential mechanisms for substrate translocation through the central pore. Large conformational rearrangements of the lid upon holoenzyme formation suggest allosteric regulation of deubiquitination. We provide a structural basis for the ability of the proteasome to degrade a diverse set of substrates and thus regulate vital cellular processes. Purified endogenous holoenzyme and lid used in EM analyses. Holoenzyme or lid was purified from yeast strains containing the indicated tag or deletions and a FLAG tag on Rpn11 (with the exception of Rpn1-FLAG) as described, separated by SDS-PAGE, and stained with Sypro Ruby. +Ab indicates that the sample was incubated with anti-FLAG antibody and purified by gel filtration. * indicates more prominent background bands. # indicates the loss of Rpn10 from Rpn1-FLAG holoenzyme after incubation with anti-FLAG antibody. Figure S3: Michaelis-Menten analyses for substrate degradation by holoenzymes reconstituted with recombinant and endogenous lid. Ubiquitinated GFP-titin-cyclin fusion substrate was degraded at 30 °C by proteasome holoenzyme (200 nM) reconstituted from base, 20S core, and recombinant or endogenous lid. Values for K_M and v_max were 1.8 µM and 0.24 min^-1 enz^-1 for holoenzyme with endogenous lid, and 3.0 µM and 0.21 min^-1 enz^-1 for holoenzyme with recombinant lid. The lack of post-translational modifications in E. coli might account for these differences. Micrographs of negatively stained endogenous (left) and recombinantly expressed (right) yeast lid subcomplexes. Corresponding 2D class averages of the particles are shown directly below the micrographs, demonstrating that the recombinant lid exhibits the same overall morphology as the endogenous one. Proteasome holoenzymes preserved in a frozen-hydrated state. Proteasome particles can be observed adopting a range of orientations, even a vertical position, when imaged through thick ice. Reference-free 2D class averages (beneath the micrographs) show a variety of views. Figure S6: Estimated resolutions. Resolutions of the reconstructions were estimated using a Fourier shell correlation of back-projected even/odd datasets, using a criterion of 0.5 correlation. Reported resolutions for the endogenous and recombinant negative-stain lid structures are 15 Å and 16 Å, respectively. The resolution for the cryoEM reconstruction is estimated to be 8.8 Å. Figure S7: Local resolution map of the holoenzyme cryoEM density. A local resolution calculation of the proteasome reconstruction shows a range of resolutions within the map. In grey are surface representations of the reconstruction, and shown below are cross-sections through the center of the density.
The cross-sections are colored according to the map's calculated local resolution, with the highest-resolution portions in dark blue and the lowest-resolution areas in red. Notably, the core particle and AAA+ ATPases contain the highest-resolution data, and the ubiquitin receptors are the lowest in resolution. Localization of lid subcomplex subunits by MBP labeling. a) Constructs bearing a FLAG tag on RPN7 and an MBP tag at either the N- or C-terminus of specific subunits were recombinantly expressed in E. coli and purified for EM analysis. MBP tags can be clearly observed as a small bright density attached to the subcomplex in reference-free 2D class averages. Three representative class averages for each of the analyzed constructs are shown in the leftmost column. Each MBP tag was unmistakably identifiable in the canonical front view of the lid particles, with the exception of N-terminal Rpn11, which was only visible in a tilted view of the particle. The corresponding forward projection and surface representation of the recombinant lid reconstruction is shown to the right of each set of class averages, indicating the subunit localization with a red arrowhead. Notably, we see decreased density for the N-terminal portions of Rpn6, which is caused by a fraction of particles with N-terminal Rpn6 truncations (see also Fig. S1). We were able to select for full-length Rpn6 by purifying the complex using a FLAG tag on Rpn6 (bottom panel). b) An Rpn12-deletion mutant clearly shows the location of Rpn12 in the lid complex. The difference density between the recombinant lid and the recombinant Rpn12-deletion lid is shown in green. Unambiguous docking of the crystal structure of the yeast 20S core. Docking of the 20S core structure (PDB ID: 1ryp) into the EM density provides an asymmetric orientation of the core relative to the base. a) Cross-section of the crystal structure docked into the EM density, showing the high level of correlation between the molecular envelope of the electron density and the secondary structural elements of the atomic coordinates. b) Extended α-helix of the α4 subunit. The helix was extended to include the entire C-terminus. c) The insertion loop of subunit α2 is obvious in the EM density. Localization of Rpn1, Rpn2, and ubiquitin-interacting subunits. a) Reference-free class averages and reconstruction of an N-terminal GST fusion of Rpn2 revealed its location on the top of the holoenzyme. b) Antibody labeling of a C-terminal Rpn1 FLAG tag results in 2D class averages showing a dimeric antibody with the single-chain variable fragments attached to Rpn1. The view observed in the class averages is depicted using the holoenzyme reconstruction, with an antibody modeled alongside it. c) A small subset of all holoenzyme preparations resulted in particles that had lost the lid subcomplex. Although there were not sufficient views of these aberrant particles to generate a 3D reconstruction, a theoretical model generated by including only Rpn1, Rpn2, Rpt1-6, and core particle subunits accurately represents the observed class averages (shown as forward projection and surface rendering). d, e) Deletion mutants were used to generate difference maps (colored) that indicate the locations of the ubiquitin receptors Rpn10 and Rpn13, respectively. f) Due to the variability of Ubp6, this DUB was localized by subtracting the variance map of the wild-type holoenzyme from a variance map of an Ubp6-deletion mutant. The difference variance map is colored magenta.
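For readers unfamiliar with how the resolution values quoted in Figure S6 are obtained, the sketch below shows a minimal Fourier shell correlation (FSC) calculation between two half-maps, reporting the resolution at the 0.5 correlation criterion used above. It is an illustrative NumPy implementation, not the authors' processing pipeline, and the box size and voxel size in the demo are placeholders.

```python
import numpy as np

def fsc(map1, map2):
    """Fourier shell correlation between two equally sized cubic 3D half-maps."""
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    n = map1.shape[0]
    freq = np.fft.fftfreq(n)
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = (np.sqrt(fx**2 + fy**2 + fz**2) * n).astype(int)
    curve = []
    for s in range(1, n // 2):
        m = shells == s
        num = np.real(np.sum(f1[m] * np.conj(f2[m])))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)

def resolution_at(curve, box_size, voxel_size, criterion=0.5):
    """Resolution (Angstrom) where the FSC curve first drops below the criterion."""
    for s, c in enumerate(curve, start=1):
        if c < criterion:
            return box_size * voxel_size / s  # shell s corresponds to d = N * apix / s
    return 2 * voxel_size                     # Nyquist limit if never crossed

# Demo with placeholder half-maps (64-voxel box, 1.0 Angstrom/voxel)
half1, half2 = np.random.rand(64, 64, 64), np.random.rand(64, 64, 64)
print(resolution_at(fsc(half1, half2), box_size=64, voxel_size=1.0))
```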
Figure S11: Rpn6 contacts subunit α2 of the 20S core. Our cryoEM reconstruction of the proteasome indicates a direct contact between the α2 subunit of the core particle and Rpn6 of the lid. To confirm this contact by crosslinking, we engineered a cysteine in α2 either at the predicted point of contact (A249C) or nearby (D245C), and conjugated sulfo-MBS, a short (7.3 Å spacer arm) heterobifunctional crosslinker, to this site. The core particle contains other cysteines, but those are relatively inaccessible to cysteine-reactive modifying agents (data not shown). Crosslinker-conjugated (or mock-conjugated) core particle purified from strains containing WT, A249C, and D245C α2 was incubated with purified base, Rpn10, 0.5 mM ATP, and lid purified from a yeast strain in which Rpn6 was C-terminally tagged with a 3x hemagglutinin (HA) tag. Reactions were divided equally for separation by SDS-PAGE followed by either Coomassie staining or anti-HA western blotting. Rpn6 has a molecular weight of 50 kDa, α2 of 27 kDa, and a crosslink between them should create an anti-HA-reactive band above 77 kDa. This crosslinked band appears only when the cysteine is placed at A249C, closest to the predicted contact between α2 and Rpn6. The two different crosslinked products likely represent crosslinking to two different sites on Rpn6.
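Returning to the degradation kinetics in Figure S3, the fitted Michaelis-Menten parameters can be plugged directly into the rate law v = v_max·[S]/(K_M + [S]). The short sketch below compares the predicted turnover for holoenzyme reconstituted with endogenous versus recombinant lid across substrate concentrations; it is a simple evaluation of the fitted curve, not a re-analysis of the data, and the substrate concentrations chosen are arbitrary.

```python
def michaelis_menten(s_um: float, km_um: float, vmax: float) -> float:
    """Degradation rate (min^-1 enz^-1) at substrate concentration s_um (uM)."""
    return vmax * s_um / (km_um + s_um)

# Fitted parameters reported in Figure S3
endogenous  = dict(km_um=1.8, vmax=0.24)
recombinant = dict(km_um=3.0, vmax=0.21)

for s in (0.5, 1.8, 5.0, 20.0):  # uM ubiquitinated GFP-titin-cyclin substrate
    ve = michaelis_menten(s, **endogenous)
    vr = michaelis_menten(s, **recombinant)
    print(f"[S] = {s:5.1f} uM: endogenous {ve:.3f}, recombinant {vr:.3f} min^-1 enz^-1")
```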
Optimized management of advanced hepatocellular carcinoma: four long-lasting responses to sorafenib. Abstract The therapeutic options for hepatocellular carcinoma (HCC) have been so far rather inadequate. Sorafenib has shown an overall survival benefit and has become the new standard of care for advanced HCC. Nevertheless, in clinical practice, some patients discontinue this drug because of side effects, and misinterpretation of radiographic response may contribute to this. We highlight the importance of prolonged sorafenib administration, even at reduced dose, and of qualitative and careful radiographic evaluation. We observed two partial and two complete responses, one histologically confirmed, with progression-free survival ranging from 12 to 62 mo. Three of the responses were achieved following substantial dose reductions, and a gradual change in lesion density preceded or paralleled tumor shrinkage, as seen by computed tomography. This report supports the feasibility of dose adjustments to allow prolonged administration of sorafenib, and highlights the need for new imaging criteria for a more appropriate characterization of response in HCC. INTRODUCTION Therapeutic options for advanced hepatocellular carcinoma (HCC) have been so far inadequate [1-5]. Recent progress in molecular biology has allowed identification of new therapeutic targets, including vascular endothelial growth factor (VEGF), which is overexpressed and related to progression-free survival (PFS) and overall survival (OS) in HCC [6,7]. Sorafenib, an oral multi-kinase inhibitor, blocks tumor cell proliferation and angiogenesis by targeting the RAF, VEGF receptor, platelet-derived growth factor receptor β, c-KIT and FLT3 signaling pathways [8]. Efficacy and safety of sorafenib have been demonstrated by phase Ⅱ/Ⅲ trials, which have proven that 400 mg bid sorafenib significantly prolongs OS, reduces the risk of death by 31%, and increases the time to radiological progression [9-11]. Consequently, sorafenib has become the standard of care for patients with advanced HCC. CASE REPORT We present four cases of unresectable, systemic-treatment-naïve HCC, enrolled in phase Ⅱ (HOPE) and Ⅲ (SHARP) trials at the Department of Medical Oncology and Hematology of the Istituto Clinico Humanitas in Rozzano (Milan, Italy), who obtained long-lasting objective responses. All patients were diagnosed by histology and computed tomography (CT) or magnetic resonance imaging (MRI).
Their peculiar responses and the personalized management of side effects provide suggestions to optimize the use of sorafenib in the management of HCC patients. Patient 1 In December 2002, a 61-year-old Caucasian man was examined by MRI, which showed a single 110 mm × 140 mm hepatic lesion and thrombosis of the main and right branches of the portal and superior mesenteric veins. The patient was Child-Pugh class A, Eastern Cooperative Oncology Group (ECOG) performance status (PS) 0, with α-fetoprotein (AFP) of 10 ng/mL. Sorafenib treatment was started in January 2003. In February 2003, grade 2 diarrhea was controlled with loperamide. In April 2003, the lesion measured 70 mm × 66 mm; grade 3 diarrhea appeared and resolved in July 2003, after a 3-d pause from sorafenib and daily loperamide. In December 2003, persistent grade 2/3 diarrhea reappeared, and resolved after a 50% reduction in sorafenib dose. In January 2004, treatment was discontinued due to progressive disease: a new lesion appeared at the edge of the previous one. From March 2003 to January 2004, AFP values progressively increased from 20 to 218 ng/mL. The patient achieved a 70% objective response according to World Health Organization (WHO) criteria, and 12 mo PFS. The patient died in June 2005. Patient 2 A 63-year-old Caucasian man was diagnosed with hepatitis B and C virus-related cirrhosis in 1983, and with HCC in November 2002. In February 2003, at baseline, a 20 mm × 25 mm lesion in hepatic segment Ⅳ and a non-clearly definable lesion in segment Ⅱ were visible. Gynecomastia, moderate thrombosis and erythema were managed, since March 2003, with periodical treatment pauses. In September 2003, the sorafenib dose was reduced by 50% (400 mg/d) due to persistent grade 2 diarrhea. A minor response was detected in May 2004, and the lesion in segment Ⅳ reduced to 15 mm × 15 mm (55%) in July 2004; the lesion in segment Ⅱ lost density and became undetectable on subsequent scans. In August 2005, after 30 mo on therapy, HCC disease progression was observed as an increase in the number of tumor lesions and an increase in the diameter of existing lesions. We observed progression in the liver and not at other sites. Thereafter, the patient was lost to follow-up and died in April 2008. Patient 3 A 70-year-old Caucasian man with hepatitis C virus (HCV)-related cirrhosis, with a good PS, was diagnosed in July 2002 with vascular-invading HCC. In January 2003, at baseline, a single hepatic lesion measured 60 mm × 50 mm and AFP was 15 ng/mL. In June 2003, a 50% reduction in sorafenib dose was required due to paresthesia and cramps in the hands and feet. From July to November 2003, the lesion gradually reached 35 mm × 35 mm. At that time, the area originally covered by the lesion ranged from a denser portion, which surrounded the persistent tumor, to a less dense area towards the normal liver. In October 2004, the lesion was barely visible. In October 2005, a complete response was achieved. AFP slightly decreased throughout the study, stabilizing at 12 ng/mL. In January 2008, after PFS of 60 mo, a new 10-mm lesion was detected in segment Ⅷ, outside the area previously involved. The patient was treated with two chemoembolizations and he is in complete response as of July 2010. Patient 4 A 69-year-old Caucasian man who had had HCV-related cirrhosis since 1987 was diagnosed with HCC in May 2005. At baseline, in June 2005, the patient presented with three hepatic lesions, thrombosis of the portal vein main branch, Child-Pugh class A and ECOG PS 0.
After 10 d on sorafenib, treatment was stopped for 1 mo due to grade 3 hand-foot skin reaction, and restarted at 50% of the dose. In September 2005, a partial response was observed, and the density of the only remaining lesion was reduced. In October 2005, the treatment was paused for 9 d due to grade 3 hand-foot skin reaction. At resolution, treatment was restarted at a dose of 400 mg every other day. In May 2007, the lesion was radiographically unchanged; it was biopsied and proven disease-free. As of July 2010, 62 mo from enrollment, the patient, still on the reduced-dose regimen, maintains a complete response (Figure 1). DISCUSSION Sorafenib is the only effective systemic therapy for the treatment of HCC, but side effects lead to treatment discontinuation in some patients. Nowadays, HCC is treated by hepatologists or oncologists. The former may be less accustomed to the typical side effects of anticancer drugs, and the latter may not be keen to manage problems related to underlying liver cirrhosis. This report proves the importance of a multidisciplinary approach in the management of advanced HCC patients. The described cases highlight how, in case of sorafenib-related side effects, reductions and pauses in the administered dose can allow long-term treatment. Efficacy of conventional cytotoxic agents is strictly related to the administered dose. With new targeted agents, length of treatment, rather than dose intensity, may be fundamental for tumor control. The winning strategy may lie in managing side effects and tailoring the anticancer regimen to the characteristics of the patient, rather than discontinuing treatment at the appearance of signs of intolerance. Unfortunately, data on the drug blood levels that are needed to achieve and maintain target inhibition are inadequate. The multi-target nature of sorafenib is one additional challenge, which means that various factors play a role in the activity of this agent [6-8]. In three of the four reported cases, objective responses were achieved following substantial dose reductions. In patient 2, partial response was observed at month 16 of the study (7 mo at full dose and 9 at half dose). In patient 3, the sorafenib dose was reduced by 50% during month 6 on treatment; an objective response was seen in month 8 of therapy, and a complete response was achieved after a total of 34 mo on study. In patient 4, the sorafenib dose was reduced by 50% after just 10 d of therapy, and 20 d later a partial response was observed. One month later, the dose was further reduced to 25%, and complete response was documented after 24 mo on study. The lesson appears clear: the recommended sorafenib dose is 400 mg bid; when needed, dose reductions, by limiting side effects, offer a better quality of life and can allow long-term administration and achievement/maintenance of tumor control. The radiological features of responding lesions are another issue worthy of attention. When assessing the efficacy of targeted therapies by imaging, a gradual change in tumor density and blood flow may be observed before tumor shrinkage. However, the uncommon radiological patterns can lead to late recognition of responses or, worse, to misleading evaluations. Indeed, in two of the reported cases (patients 3 and 4), lesions seemed unchanged and, at first sight, had been considered active. On the contrary, these lesions were responding and being substituted by residual scars (in patient 4, this was confirmed by liver biopsy).
This issue deserves serious consideration and calls for new, more appropriate methods of appraisal. With targeted therapies, traditional methods of quantitative evaluation, such as WHO criteria or Response Evaluation Criteria in Solid Tumors, may not be optimal, and the need for qualitative standardized measurements becomes more pressing [9,12-14]. Although positron emission tomography is not reliable for evaluation of HCC, the Hounsfield unit (HU) density scale for CT imaging and signal patterns for MRI can be used to measure tumor necrosis. Combination of these methods with dimensional measurements allows more precise characterization of sorafenib responses in this disease [13,14]. In our experience, a lesion density reduction to 40 HU on CT scan can be considered indicative of tumor necrosis. An additional means of response evaluation may be provided by analysis of blood samples. At present, there is no agreement on specific biomarkers of response to sorafenib. We analyzed routine laboratory tests and did not identify any correlation in our patients; however, we did not search for alternative signals such as markers of inflammation and oxidative stress. The recommendations that derive from the reported cases are to tailor dose and schedule, to administer the drug until progressive disease is observed, and to evaluate critically both dimensional and density changes in tumor lesions. Anticancer drugs have evolved in recent years, and so should the way physicians view tumor treatment strategies. Although this calls for more personalized treatment plans, agreement on standardized and more appropriate assessment techniques will allow more conscious decision making in the treatment of advanced HCC patients [13,14]. Figure 1. Patient 4 - arterial phase computed tomography scans. A: Pre-sorafenib, the 25-mm lesion in segment Ⅷ was biopsied, and measured > 100 HU. B: Partial response, tumor necrosis and intralesional HU of 40-70 after 4 mo on sorafenib. C: After 40 mo on sorafenib, the necrotic area measured < 40 HU intralesionally, and was disease-free at histological examination. The arrows show the liver lesion.
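To make the response assessments above reproducible, the sketch below computes the WHO bidimensional response for Patient 1 (110 × 140 mm at baseline, 70 × 66 mm at best response) and applies the CT density rule of thumb from the discussion (intralesional density below roughly 40 HU suggesting necrosis). The WHO thresholds used (a decrease of at least 50% in the product of perpendicular diameters for partial response, an increase of at least 25% for progression) are the standard criteria; the helper functions themselves are illustrative, not taken from the paper.

```python
def who_response(baseline_mm: tuple, followup_mm: tuple) -> str:
    """Classify response from products of perpendicular diameters (WHO criteria)."""
    base = baseline_mm[0] * baseline_mm[1]
    new = followup_mm[0] * followup_mm[1]
    change = (new - base) / base * 100
    if new == 0:
        return f"complete response ({change:+.0f}%)"
    if change <= -50:
        return f"partial response ({change:+.0f}%)"
    if change >= 25:
        return f"progressive disease ({change:+.0f}%)"
    return f"stable disease ({change:+.0f}%)"

def likely_necrotic(intralesional_hu: float, threshold: float = 40.0) -> bool:
    """Density rule of thumb from the discussion: below ~40 HU suggests necrosis."""
    return intralesional_hu < threshold

# Patient 1: 110 x 140 mm at baseline, 70 x 66 mm at best response (~70% decrease)
print(who_response((110, 140), (70, 66)))
# Patient 4 (Figure 1): > 100 HU pre-treatment vs < 40 HU after 40 months
print(likely_necrotic(100), likely_necrotic(38))
```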
PLCE1 Polymorphisms Are Associated With Gastric Cancer Risk: The Changes in Protein Spatial Structure May Play a Potential Role Background Gastric cancer (GC) is one of the most significant health problems worldwide. Some studies have reported associations between Phospholipase C epsilon 1 (PLCE1) single-nucleotide polymorphisms (SNPs) and GC susceptibility, but its relationship with GC prognosis has lacked exploration, and the specific mechanisms have not yet been fully elaborated. This study aimed to further explore the possible mechanism of the association between PLCE1 polymorphisms and GC. Materials and Methods A case-control study, including 588 GC patients and 703 healthy controls among the Chinese Han population, was performed to investigate the association between SNPs of PLCE1 and GC risk by logistic regression in multiple genetic models. The prognostic value of PLCE1 in GC was evaluated by the Kaplan-Meier plotter. To explore the potential functions of PLCE1, various bioinformatics analyses were conducted. Furthermore, we also constructed the spatial structure of the PLCE1 protein using the homology modeling method to analyze its mutations. Results Rs3765524 C > T, rs2274223 A > G and rs3781264 T > C in PLCE1 were associated with an increased risk of GC. The overall survival and progression-free survival of patients with high expression of PLCE1 were significantly lower than those with low expression [HR (95% CI) = 1.38 (1.1-1.63), P < 0.01; HR (95% CI) = 1.4 (1.07-1.84), P = 0.01]. Bioinformatic analysis revealed that PLCE1 was associated with protein phosphorylation and played a crucial role in the calcium signaling pathway. Two important functional domains, a catalytic binding pocket and a calcium ion binding pocket, were found by homology modeling of the PLCE1 protein; the rs3765524 polymorphism could change the efficiency of the former, and the rs2274223 polymorphism affected the activity of the latter, which may together play a potentially significant role in the tumorigenesis and prognosis of GC. Conclusion Patients with high expression of PLCE1 had a poor prognosis in GC, and SNPs in PLCE1 were associated with GC risk, which might be related to changes in the spatial structure of the protein, especially variation in the efficiency of PLCE1 in the calcium signaling pathway. INTRODUCTION Gastric cancer (GC) is becoming a worldwide problem year by year, severely endangering human life and health. It was estimated that over one million new GC cases occurred in 2018 and about 783,000 patients died of the disease, making GC the fifth most frequently diagnosed cancer and the third deadliest cancer worldwide (Bray et al., 2018). China has a large number of GC patients, with a 5-year overall survival (OS) of less than 25% (Chen et al., 2016; Zeng et al., 2018). The pathogenesis of GC is still unclear, but some risk factors have been reported, such as Helicobacter pylori (Shimizu et al., 2014; Plummer et al., 2015; Jukic et al., 2021), Epstein-Barr virus infection (Camargo et al., 2014), low consumption of vegetables and fruits, high intake of salt and pickles, smoking and obesity (Lunet et al., 2007; Lin et al., 2014; Li et al., 2019). However, these research results are far from enough for us to understand the oncogenesis and susceptibility mechanism of GC.
In recent years, genomic analysis of gastric tumors has highlighted the importance of their genetic heterogeneity; differentiation of GC molecular subtypes may be the key to guiding early diagnosis strategies, identifying new therapeutic targets, and predicting the prognosis of patients. In the last decade, single nucleotide polymorphism (SNP) analysis has been extensively used to screen candidate genes and detect various complex human diseases, providing a way to identify genetic loci associated with the heterogeneity of cancers. The Phospholipase C epsilon 1 (PLCE1) gene is one of the large candidate genes located at 10q23 and serves as a member of the human phosphoinositide-specific phospholipase C family (Song et al., 2001), which exerts important functions in growth, differentiation, and oncogenesis (Citro et al., 2007; Bunney and Katan, 2010; Gresset et al., 2012). The most-reported SNPs in PLCE1 are rs2274223 and rs3765524, which have significant value in increasing the risk of gastrointestinal tumor progression (Cui et al., 2014a; Mocellin et al., 2015; Mou et al., 2015; Xue et al., 2015; He et al., 2016; Gu et al., 2018). However, relevant studies of the associations between PLCE1 and GC susceptibility remain inconsistent at present, the prognostic value of PLCE1 in GC is unclear, and the specific mechanism linking these SNPs and GC risk is elusive. Thus, further studies are still necessary. This study aimed first to analyze the relationship between three SNPs (rs3765524, rs2274223, and rs3781264) in the PLCE1 gene and GC susceptibility in a case-control study in the Chinese Han population; then we explored the prognostic value of PLCE1 in GC using online databases; finally, we tried to explain the mechanism of the association between the SNPs in PLCE1 and the risk and prognosis of GC from the perspective of various bioinformatics analyses and protein spatial structure changes. We hope to contribute to the further exploration of the possible mechanism of the association between PLCE1 polymorphisms and GC. Study Population A case-control study was conducted, including 588 patients with GC (392 males and 196 females) and 703 healthy control subjects (396 males and 307 females). All subjects were of Chinese Han ancestry. Patients with histologically confirmed GC in the Second Affiliated Hospital of Air Force Medical University from January 2015 to January 2019 were enrolled. The exclusion criteria for patients were: patients who had a family history (three generations) of tumors; those who had received radiotherapy or chemotherapy before blood sample collection; patients with any other digestive diseases or with gastric lesions caused by metastasis of another cancer. Additionally, the healthy controls were randomly recruited from the physical examination center of the same hospital during the same period, when they visited for an annual health examination. When recruiting healthy participants, we collected demographic information through personal interviews using a structured questionnaire administered by trained personnel, including age, gender, residential region, ethnicity, and family history of cancer and other diseases. Healthy participants who had a family history of cancer were also excluded from the study. After that, we collected 5 mL of peripheral blood from each subject to detect the SNPs of the PLCE1 gene for our research. All participants were voluntarily recruited and provided written informed consent before taking part in this study.
All research analyses were performed following the approved guidelines and regulations. This study was approved by the Research Ethics Committee of the Second Affiliated Hospital of Air Force Medical University (K201501-05) and abided by the Declaration of Helsinki. Genotyping Agena MassARRAY Assay Design 4.0 software was used to design the multiplexed SNP MassEXTEND assay. The PLCE1 gene rs3765524, rs2274223, and rs3781264 polymorphisms were genotyped on the Agena MassARRAY RS1000 platform according to the standard protocol (Applied Biosystems, Foster City, CA, United States). Then, Agena Typer 4.0 software was applied to analyze and manage our data. Bioinformatics Analysis The Prognostic Value of PLCE1 in GC The Kaplan-Meier (K-M) plotter was used to evaluate the prognostic value of the mRNA expression of PLCE1 in GC patients. Patients were divided into high- and low-expression groups according to the median values of mRNA expression and validated by K-M survival curves, with the hazard ratio (HR) with 95% confidence intervals (CIs) and log-rank P-value. PLCE1-Associated Genes Screening and Enrichment Analysis The STRING database (Szklarczyk et al., 2019) was applied to detect genes co-expressed with PLCE1 in GC, and Cytoscape software (Smoot et al., 2011) was used for network visualization. Protein Homology-Modeling and Visualization The amino acid (aa) sequence of the PLCE1 protein was obtained through NCBI. We used SWISS-MODEL to perform PLCE1 protein homology-modeling from its primary sequence (Schwede et al., 2003; Waterhouse et al., 2018). The protein with the highest coverage of the primary sequence was selected as the most homologous protein. We downloaded the files of the constructed protein spatial structures from SWISS-MODEL and then opened them in PyMOL version 2.4 for protein visualization, to pave the way for PLCE1 protein spatial structure analysis (Arroyuelo et al., 2016; Yuan et al., 2016). Statistical Analysis SPSS 26 (IBM SPSS Statistics, RRID:SCR_019096) software was applied to analyze the general characteristics of the GC patient and healthy control groups. Welch's t-test and the Pearson chi-square test were applied to analyze differences in the basic characteristics between the two groups. The Pearson chi-square test was also used to assess deviation from Hardy-Weinberg equilibrium (HWE) by comparing the observed and expected genotype frequencies among the control subjects. Allele and genotype frequencies were compared between GC patients and healthy controls using the Pearson chi-squared test and Fisher's exact test. To evaluate the associations between PLCE1 SNPs and the risk of GC, we calculated odds ratios (ORs) and 95% confidence intervals (CIs) adjusted for gender and age. Three different genetic models (the codominant model, the dominant model and the recessive model) were applied using PLINK software (PLINK, version 2.0, RRID:SCR_001757). A p-value < 0.05 was considered statistically significant in all statistical tests in this study. Demographic Characteristics The primary characteristics of all subjects are shown in Table 1 (P-values calculated using Welch's t-test/Pearson chi-square test; SD, standard deviation; *P < 0.05 indicates statistical significance, marked in bold). A total of 1,291 participants, including 588 GC patients and 703 healthy controls, were enrolled in this study.
58.12 ± 11.66 years in GC patients and 48.57 ± 9.43 years in healthy controls, which indicated that the patients were older than the healthy participants (P < 0.001). Besides, the proportion of males was larger than that of females in the GC group (male to female, 66.67% to 33.33%), while the difference between males and females in the control group was smaller (male to female, 56.33% to 43.67%). The difference in the distributions between GC patients and healthy controls suggested that the ORs and p-values needed to be adjusted for age and gender in the subsequent analysis. Additionally, most of the participants in the study had no family history of cancer (cases, 96.3%; controls, 98.0%). Moreover, nearly one-third (30.3%) of the patients were at an early stage (the carcinoma was confined to the gastric mucosa and submucosa).

Genotyping Analysis

The detailed information on the three selected SNPs, including roles, MAF, and HWE P-values, is listed in Table 2. These SNPs were genotyped successfully in the further analysis. The MAF of all SNPs was greater than 5%, and the observed genotype frequencies of all SNPs in the control group were in HWE (P > 0.05). ORs and P-values were adjusted by age and gender (OR, odds ratio; CI, confidence interval; *P < 0.05 indicates statistical significance, marked in bold).

[Figure caption] Kaplan-Meier survival curves comparing patients with high expression (red) and low expression (black) of PLCE1 in gastric cancer, plotted for two probes (205112 and 214159) using the Kaplan-Meier plotter database with a P-value threshold of < 0.05.

Differences in the frequency distributions of the SNP genotypes and alleles between GC patients and healthy controls were compared by the Pearson chi-square test and odds ratios (ORs) to evaluate the associations with GC risk, as displayed in Supplementary Table 1. The minor allele of each SNP was treated as a risk factor and compared to the wild-type (major) allele. Remarkably, we found that the allele frequency of rs2274223, located in an exon region, was significantly different between GC cases and healthy controls [OR (95% CI) = 1.20 (1.00-1.45), P = 0.048]. What is more, the genotype distribution of rs3781264, located in an intron region, was also significantly different between the two groups [OR (95% CI) = 1.43 (1.16-1.76), P = 0.001].

The Prognostic Value of PLCE1 in GC

The K-M plotter analysis showed that high mRNA expression of PLCE1 was associated with worse survival in GC [HR 95% CI: (2.01-3.16), P < 0.001], which indicated that PLCE1 increased the risk of a poor prognosis in GC patients.

PLCE1 PPI Analysis

We investigated the PPI network of PLCE1 via the STRING website and obtained a core network constructed by 11 nodes and 22 edges with an average node degree of 4 (P = 0.004; Figure 2). The proteins interacting with PLCE1 were PIP5K1A, PIP5K1B, PIP5K1C, PIP5KL1, IPMK, ITPKA, ITPKB, HRAS, RAP2B, and RRAS.

PLCE1 Protein Spatial Structure Changes

We modeled the primary PLCE1 protein with SWISS-MODEL. The original (wild-type) model of PLCE1 is shown in Figure 5A, in which the protein is colored from blue to red to represent the coiled peptide chain from the N- to the C-terminal.

FIGURE 3 | The GO enrichment analysis of PLCE1 and its co-expression genes by the DAVID database. BP (biological process) is marked in green; CC (cellular component) in orange; and MF (molecular function) in purple.
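To make the genotype-association analysis above concrete, the following minimal Python sketch computes an allele-level odds ratio with a Woolf 95% confidence interval and a Pearson chi-square test from a 2×2 allele-count table. The counts used here are hypothetical placeholders (the per-allele counts are not reproduced in the text), and the sketch is unadjusted, unlike the age- and gender-adjusted PLINK models used in the study.

```python
import math
from scipy.stats import chi2_contingency

# Hypothetical 2x2 allele counts (rows: cases/controls, cols: minor/major allele);
# these placeholders stand in for the real per-allele counts.
table = [[310, 866],    # GC cases:        minor, major
         [320, 1086]]   # healthy controls: minor, major

chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Odds ratio with a Woolf (log-OR) 95% confidence interval
a, b = table[0]
c, d = table[1]
or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)

print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The same table can be rebuilt for each SNP from its genotype counts (each homozygote contributing two alleles, each heterozygote one of each).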
We found that the PLCE1 protein has two crucial functional domains, namely the calcium ion binding pocket (related to activity), which is composed of the 1873, 1897, 1926, 1928, and 1933 aa sites (red in Figure 5B), and the catalytic binding pocket (related to catalytic efficiency), consisting of the 1391, 1392, 1421, 1423, 1436, 1470, 1637, 1639, 1743, 1770, and 1772 aa sites (orange in Figure 5B). Hence, rs2274223 (A > G) changed the aa at the 1927 site, which may affect the activity of the calcium-binding pocket (yellow in Figure 5C). Similarly, the mutation of rs3765524 (T > C) changed the aa at the 1771 site, influencing the catalytic efficiency of the catalytic binding pocket (green in Figure 5C).

Interestingly, in a further analysis of the impact of the single-aa mutations on the protein microenvironment, we found that ARG1927 in the wild type formed two ionic bonds, with MET1901 and SER1903, respectively (Figure 5D), making the interaction force between the two loops extremely tight. However, the mutation (A > G) of rs2274223 resulted in Arg1927His in the PLCE1 protein, as displayed in Figure 5E; although it still formed ionic bonds with these two aa residues after the mutation, one of them was located on the loop of the 1927 site itself and formed a conjugate bond, causing the attraction between the residues to be stronger than the original one. Consequently, the loop at 1927 would be tighter than before, and the calcium-binding pocket would be more difficult to open after the mutation, leading to a decrease in the activity of the PLCE1 protein.

Likewise, in the wild type, Ile1771 formed two ionic bonds, with the Gln1687 and Val1689 residues, respectively. The interaction force between the ionic bonds and the left loop was tight, but no force existed between the loops on the right to "fix" them (Figure 5F), so dissociation in solution or local changes would occur more easily, facilitating entry of the substrate into the active center. However, rs3765524 (T > C) led to Ile1771Thr, which generated four ionic bonds with the four aa residues (Gln1687, Val1689, Ser1772, and Leu1798) in the surrounding space, two of which were located on the left loop and the others on the right loop, making the local structure more stable, so changing the catalytic pocket becomes more challenging (Figure 5G).

FIGURE 4 | The KEGG enrichment analysis of PLCE1 and its co-expression genes by the DAVID database. The size of the circle represents the number of genes enriched: the larger the circle, the more genes are enriched. From orange to blue, -log10(P-value) gradually decreases.

These variations, combined with the results of the bioinformatics analysis, indicated that SNPs in PLCE1 could change the catalytic activity of the protein in Ca2+-related pathways, so more substrates (such as Ca2+) might be required to perform normal functions, which will be verified in our future studies.

DISCUSSION

As a common genetic variation in the human genome, SNPs are beneficial for understanding the possible relationships between tumors and individuals' biological functions on a genomic scale. They provide a comprehensive tool for identifying candidate genes of cancer, offer fundamental knowledge for clinical diagnosis, and support drug discovery for relevant genetic diseases; therefore, the SNP is considered a valuable biological marker in diverse tumors (Engle et al., 2006).
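As a toy illustration of how the single-nucleotide substitutions discussed above propagate to amino-acid changes (Arg1927His for rs2274223, Ile1771Thr for rs3765524), the following sketch translates a codon before and after a base change. The codons are stand-ins chosen so that the translations reproduce the reported amino-acid changes; they are not asserted to be the actual genomic codons or strand orientation at these loci.

```python
# Minimal standard-genetic-code lookup for the codons of interest.
CODON_TABLE = {"CGC": "R", "CAC": "H", "ATC": "I", "ACC": "T"}

def mutate(codon: str, pos: int, base: str) -> str:
    """Return the codon with the base at 0-indexed position pos replaced."""
    return codon[:pos] + base + codon[pos + 1:]

# Illustrative codon-level view of rs2274223: Arg (R) -> His (H) at aa 1927.
wild_1927 = "CGC"
mut_1927 = mutate(wild_1927, 1, "A")   # "CAC"
print(wild_1927, "->", CODON_TABLE[wild_1927], "|", mut_1927, "->", CODON_TABLE[mut_1927])

# Illustrative codon-level view of rs3765524: Ile (I) -> Thr (T) at aa 1771.
wild_1771 = "ATC"
mut_1771 = mutate(wild_1771, 1, "C")   # "ACC"
print(wild_1771, "->", CODON_TABLE[wild_1771], "|", mut_1771, "->", CODON_TABLE[mut_1771])
```

A single base change at the second codon position is enough to swap a basic residue (Arg) for His, or a hydrophobic residue (Ile) for the polar Thr, which is what drives the microenvironment changes analyzed above.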
Protein is an indispensable carrier of various biological activities and plays a crucial role in the smooth progress of diverse life processes. The primary structure of a protein is its aa sequence, which is derived from gene transcription and translation. It is the basis of the higher-order structure of a protein and determines the spatial structure and functional properties of the protein. When a SNP is present in a gene, the expressed aa sequence may change, resulting in a change in the spatial structure of the protein. Therefore, it is imperative to study the association between SNPs and GC risk from the perspective of protein spatial structure changes, which will contribute to research on the pathogenesis and prognosis of GC.

In this study, for the first time, we analyzed the correlation between SNPs and GC susceptibility and prognosis in terms of protein spatial structure changes. Firstly, we carried out a case-control study, and by detecting and analyzing the differences in the SNPs of PLCE1 between GC patients and healthy controls, we found that rs3765524 (C > T), rs2274223 (A > G), and rs3781264 (T > C) were related to the susceptibility of GC. Then, the K-M plotter demonstrated that high expression of PLCE1 was associated with poor survival in GC. To explore the potential function of PLCE1, we used a series of bioinformatics tools to investigate the PPI network, GO terms, and KEGG pathways of PLCE1, and found that it plays a potential role in the calcium signaling pathway. Furthermore, we constructed the primary and mutant protein spatial structures of PLCE1 by the homology modeling method and, interestingly, found that the changes in the protein spatial structure could reduce the catalytic activity, which might mainly influence its function in Ca2+-related pathways. Combined with the bioinformatic results for PLCE1, we speculated that PLCE1 polymorphisms increase GC susceptibility by changing the spatial structure of the PLCE1 protein, affecting its activity and catalytic efficiency in the calcium signaling pathway. This hypothesis will be verified in our future experiments.

FIGURE 5 | (D) The wild-type protein microenvironment analysis of PLCE1 at the single 1927-aa site: ARG1927 formed two ionic bonds, with MET1901 and SER1903, respectively, making the force between the two loops very tight. (E) The microenvironment analysis of the mutation (rs2274223 A > G) of PLCE1: the Arg1927His mutant of PLCE1 formed ionic bonds with these two aa residues; one of them was located on the loop of the 1927 site itself and formed a conjugate bond, making the attraction between the residues stronger than in the wild type. (F) The wild-type protein microenvironment analysis of PLCE1 at the single 1771-aa site: Ile1771 formed two ionic bonds, with the Gln1687 and Val1689 residues, respectively. (G) The microenvironment analysis of the mutation (rs3765524 T > C) of PLCE1: the Ile1771Thr mutant formed four ionic bonds, with Gln1687, Val1689, Ser1772, and Leu1798, two of which were located on the left loop and the others on the right loop, making the local structure more stable.

As a member of the phospholipase C family of proteins, PLCE1 encodes a phospholipase C enzyme that mediates the hydrolysis of phosphatidylinositol-4,5-bisphosphate to produce the Ca2+-mobilizing second messenger inositol 1,4,5-trisphosphate and the protein kinase C-activating second messenger diacylglycerol. It interacts with the proto-oncogene Ras, among other proteins (Bunney et al., 2009).
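The median-split survival comparison described above can be sketched with the lifelines package, as follows. The expression, follow-up, and event arrays below are random placeholders standing in for the K-M plotter cohort, not real patient data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
expr = rng.lognormal(size=200)         # placeholder PLCE1 mRNA levels
time = rng.exponential(40, size=200)   # placeholder follow-up times (months)
event = rng.integers(0, 2, size=200)   # 1 = death observed, 0 = censored

# Median split into high- and low-expression groups, as in the K-M plotter
high = expr > np.median(expr)

kmf = KaplanMeierFitter()
for mask, label in [(high, "PLCE1 high"), (~high, "PLCE1 low")]:
    kmf.fit(time[mask], event_observed=event[mask], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p =", res.p_value)
```

With real expression and survival columns substituted in, the same few lines reproduce the high- versus low-expression curves and the log-rank comparison underlying the prognosis claim.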
The expression of PLCE1 has been significantly related to tumor differentiation degree, invasion depth, lymph node metastasis, and distant metastasis (Cui et al., 2014b; Cheng et al., 2017; Yu et al., 2020). We confirmed the significance of the two previously reported SNPs, rs3765524 and rs2274223, and revealed, through genotyping and logistic regression in this case-control study, that another SNP in PLCE1, rs3781264, was associated with GC risk. Abnet et al. (2010) first used a GWAS to identify variants of PLCE1 with a significant correlation with GC in the Chinese Han population. Until now, an increasing number of studies have identified shared susceptibility loci in PLCE1, such as rs2274223 A > G and rs3765524 C > T, for gastrointestinal cancer (Abnet et al., 2010; Umar et al., 2013; Cui et al., 2014b; Liu et al., 2014; Malik et al., 2014; Mocellin et al., 2015; He et al., 2016; Gu et al., 2018; Li et al., 2018; Liang et al., 2019; Xie et al., 2020), with the former being the most reported SNP of PLCE1, but the conclusions lack consistency. A meta-analysis showed that the PLCE1 rs2274223 polymorphism conferred susceptibility to esophageal cancer and GC in Asians (Umar et al., 2013). However, another study suggested that an increased association between rs2274223 and cancer risk among Asian ethnic groups could only be observed in esophageal cancer rather than GC (Xue et al., 2015). The discrepancy probably results from considerable heterogeneity across these studies as well as from gene-gene and gene-environment interactions. A study (Liang et al., 2019) also supported our hypothesis at the protein level by immunohistochemistry (IHC), confirming that PLCE1 protein expression was higher in the rs3765524 CT/TT group than in the rs3765524 CC group. Additionally, our study showed that rs3781264, located in an intron region, had a potential relationship with GC risk, which had scarcely been reported before. Hitherto, most previous studies have focused on the correlation between gene SNPs and cancer susceptibility or risk but have rarely explored the underlying mechanism.

Currently, the diagnosis, treatment, and prognosis of GC are usually based on a risk stratification system. The most efficient curative therapeutic option for GC patients is timely and adequate surgical resection (Lutz et al., 2012). Besides, chemotherapy, as a second-line treatment, can improve overall survival (Kang et al., 2012). Although we have some understanding of the carcinogenesis of GC, early diagnosis and appropriate therapy for GC patients still remain a major clinical challenge (Choi et al., 2003; Ang and Fock, 2014). It is essential to identify individuals at high risk of GC; thus, more precise gene loci associated with it should be explored. In this study, the K-M plotter analysis was performed in the online bioinformatics database, and both probes showed that patients with high mRNA expression of PLCE1 would have a poorer prognosis. This suggests that PLCE1 might have the potential to be a biomarker for the prognosis of GC.

The function of a protein is significantly determined by its spatial structure, which is an indispensable part of protein research. In this study, we analyzed the changes in the PLCE1 protein spatial structure after the mutations by the homology modeling method, and we found that it has two important functional domains, the calcium-binding pocket, related to the protein's activity, and the catalytic binding pocket, associated with its catalytic efficiency, which had never been reported before.
Interestingly, the two SNP sites we focused on, rs2274223 and rs3765524, are located in these important domains. The mutation at rs2274223 affected the calcium-binding pocket, impairing the protein's Ca2+-related bioactivity, and the T > C change at rs3765524 resulted in a decrease in catalytic efficiency. Together, these changes altered the structure, stability, and function of the PLCE1 protein. Therefore, based on our research, we suppose that the SNPs of PLCE1 may have potential significance in the tumorigenesis and progression of GC, perhaps mainly attributable to the changes in protein activity, but further studies are needed for confirmation.

In summary, this study for the first time analyzes the correlation between SNPs of PLCE1 and GC in terms of protein spatial structure changes, which is of great significance for the diagnosis and treatment of patients with GC. The more complex connections and subtle crosstalk will be examined in our future work; the corresponding experiments are already underway.

There were some limitations in this study. Firstly, we selected only three SNPs of PLCE1, and other potentially significant loci were not included in this case-control study. Secondly, the prognostic value of PLCE1 was investigated in patients from the online database rather than in the subjects included in our study, which probably caused background heterogeneity. Thirdly, the proposed mechanism of potential significance in the tumorigenesis and progression of GC was based on the bioinformatic results and the protein homology modeling analysis but lacked experimental verification. Therefore, in vitro and in vivo studies are needed and will be performed in the future to confirm our results, and we hope to contribute to the era of precise diagnosis and individualized treatment of GC.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Research Ethics Committee of the Second Affiliated Hospital of Air Force Medical University. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

XH, LYu, and GB designed the research. ZY, SC, JX, and SD performed the study. XH, JJ, SP, PY, LYu, and LYa analyzed the results. XH and JJ edited and commented on the manuscript. All authors contributed to the article and approved the submitted version.

ACKNOWLEDGMENTS

We would like to acknowledge all the participants involved in this study.
Physico-chemical evaluation of leach and water from the Borba Gato streamlet within the catchment area of the urban waste landfill of Maringá, Paraná State, Brazil

The physico-chemical characteristics of the leach deposited in the landfill waste pond and of the water from the Borba Gato streamlet are evaluated. Twenty-six physico-chemical parameters were analyzed at three collection sites: two in the streamlet, one upstream (P-01) and one downstream (P-02) of the landfill waste pond, and one in the leach deposit pond (P-03). The streamlet area under analysis was impacted due to its location in an agricultural area and to its urban waste deposits. Concentrations of aluminum, iron, and mercury were reported above the freshwater quality standard of Conama resolution 357/2005 (Class 2). Further, throughout the rainy period, the ammoniacal nitrogen content was above the resolution's quality standard for fresh water. Moreover, the landfill leach was above the effluent discharge standards established by Conama 357/2005. An efficient treatment for the effluent generated in Maringá is required, since there is evidence of leach pollution of the Borba Gato streamlet.

Introduction

The quality of water sources is highly influenced by the type of soil occupation and use on the neighboring margins. Changes in the physical, chemical, and biological characteristics of any natural environment affect, directly or indirectly, its fauna and flora and, as a consequence, human beings.

The Borba Gato streamlet, approximately 8 km long, lies entirely within the municipality of Maringá, Paraná State, Brazil. Its source lies in the Horto Florestal, a natural reserve within the urban area, and it flows into the Pinguim streamlet (in the rural area), which, in turn, discharges into the Ivaí river. The basin of the Borba Gato streamlet covers urban, rural, preservation, mineral exploration, and urban waste landfill areas.

Current research studies the area of influence of the waste landfill of Maringá. In fact, one of the most serious problems in the final deposition of urban wastes occurs when the waste decomposes and gases and leachate are released into the environment, with several impacts such as water, soil, and air pollution. Since the waste landfill lies some 70 m from the streamlet margins, the current investigation evaluates the physical and chemical characteristics of the landfill leach deposited in the pond and of the water of the Borba Gato streamlet.

Collections were undertaken at the streamlet surface with a flask immersed 10 to 15 cm, with its mouth placed against the current. Leach samples were retrieved with a submersed cylindrical collector attached to a 1.5 L plastic bottle by an iron rod.

Table 1 shows the evaluated parameters and the methods used. Rainfall data were provided by the climatological station of the State University of Maringá, Paraná State, Brazil. Further, a t-test verified whether there were any differences between the sites upstream (P-01) and downstream (P-02) of the waste landfill. The following hypotheses were tested for each parameter at the 5% significance level:

H0: there is no difference between the means of parameter i at P-01 and P-02;
H1: there is a difference between the means of parameter i at P-01 and P-02;

in which i = total chloride, true color, BOD5, zinc, and mercury. Confidence intervals (CI) were computed at the 95% confidence level for parameters with a significant difference.
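A minimal Python sketch of the hypothesis test described above is given below, assuming twelve hypothetical monthly values per site (not the measured series). Welch's variant of the two-sample t-test is used here as one reasonable choice, and a 95% confidence interval for the mean difference is reported only when the test is significant, mirroring the procedure in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly values of one parameter at P-01 (upstream) and P-02 (downstream)
p01 = np.array([12.1, 13.4, 11.8, 12.9, 14.2, 13.0, 12.5, 13.8, 12.2, 13.1, 12.7, 13.5])
p02 = np.array([21.9, 23.8, 22.4, 24.0, 25.1, 23.3, 22.8, 24.5, 23.0, 23.9, 22.6, 24.2])

# Two-sample t-test of H0: mean(P-01) == mean(P-02), alpha = 0.05
t_stat, p_val = stats.ttest_ind(p01, p02, equal_var=False)  # Welch's variant
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")

if p_val < 0.05:
    # 95% CI for the mean difference, with Welch-Satterthwaite degrees of freedom
    diff = p02.mean() - p01.mean()
    v1, v2 = p01.var(ddof=1) / len(p01), p02.var(ddof=1) / len(p02)
    se = np.sqrt(v1 + v2)
    dof = se**4 / (v1**2 / (len(p01) - 1) + v2**2 / (len(p02) - 1))
    t_crit = stats.t.ppf(0.975, dof)
    print(f"95% CI for the difference: {diff - t_crit*se:.2f} to {diff + t_crit*se:.2f}")
```

Running this once per parameter reproduces the (t; p; CI) triples reported throughout the results section.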
Results and discussion

The current study shows that rainfall is a hydrological factor that disseminates pollutants in the water body due to the lack of dense or riparian vegetation within the river boundary, with the subsequent discharge of organic and inorganic matter into the streamlet. The rainy period was more intense between September 2005 and March 2006 (Figure 2). The results are the following.

Temperature: In most months, the environmental temperature at P-01 and P-02 was higher than that of the streamlet (Figure 3A). The temperature of the leach deposited in the waste pond was higher than the environmental temperature between October 2005 and March 2006 (Figure 3B).

According to Gastaldini and Mendonça (2003), temperature affects physical, chemical, and biological processes in water bodies and influences the concentrations of several variables. An increase in temperature is followed by an increase in chemical reaction speed and by a decrease in the solubility of gases in water, such as oxygen (O2), carbon dioxide (CO2), nitrogen (N2), and methane (CH4).

Dissolved oxygen: Figure 4A shows that dissolved oxygen decreased over the months, albeit within the freshwater quality standards established by Conama 357/2005 (Class 2) (BRASIL, 2005). The decrease may be related to rain intensity, which transported a greater volume of inorganic material to the streamlet bed, with a decrease in dissolved oxygen and a rise in temperature, which reduced gas solubility and increased the oxidation of the transported material. Statistical analysis showed that the mean dissolved oxygen at P-01 and P-02 did not differ significantly (t = -1.166; p = 0.248).

There was a lower concentration in the leach of the pond, ranging from non-detected amounts to 6.42 mg L-1 O2 (Figure 4B). It is possible that the oscillation occurred due to the increase in organic material in the months with low oxygen concentrations.

The atmosphere and photosynthesis are the main oxygen sources for water. On the other hand, losses occur through the decomposition of organic matter, the respiration of aquatic organisms, and the oxidation of metallic ions, such as iron and manganese (ESTEVES, 1998).

pH: The sampled sites in the streamlet registered pH levels within the freshwater quality standard stipulated by Conama 357/2005 (Class 2) (Figure 5A). The difference in mean pH was significant (t = 2.680; p = 0.009; CI = 0.045 to 0.309); that is, the pH at P-02 was higher than that at P-01, with 95% certainty that the difference lies within a confidence interval (CI) between 0.045 and 0.309. Moreover, the pH of the leach deposited in the pond was alkaline and within the effluent discharge standard stipulated by Conama 357/2005 (Figure 5B).

True color: Although true color oscillated, it remained within the established standard (75 mg L-1 Pt) of Conama 357/2005 in most months, with the exception of March 2006 at P-01, due to heavy rains minutes before the collection, and of February and March 2006 at P-02 (Figure 6A). There were no significant differences by t-test in the levels of true color at P-01 and P-02 (t = 1.389; p = 0.169).

The true color of P-03 had its highest peak in November (Figure 6B), possibly due to the deposition of building residues around the pond to increase its depth. There was a decrease in concentration in March 2006, probably due to building leftovers deposited on the road leading to the pond to minimize the percolate flow; media reports had revealed the possibility of flooding because of rain intensity.

Total chloride: Concentrations of chloride in the waste pond were lower than 250 mg L-1, the limit established by Conama 357/2005 (Figure 7A). There was a significant difference (t = 9.165; p = 0.000; CI = 7.52 to 11.78 mg L-1) in the chloride concentration between P-01 and P-02.

Chloride concentrations in the leach increased successively throughout the first six months of the current research (dry season), with a decrease in September and October and a rise in November (Figure 7B).
Carmo et al. (2005) report that possible sources of chloride in water bodies are sewage and fertilizers. However, the current study pinpoints the leach as a possible source of chloride downstream of the waste landfill. It may be noted that the peak in October at P-02 occurred due to the placing of two more drain pipes in the landfill waste pond in the direction of the stream at the beginning of October.

When the leach in the landfill waste pond is taken into account, the ammoniacal nitrogen at P-03 was above the discharge standards established by Conama 357/2005 (Figure 8B). Consequently, treatment of the effluent prior to its discharge into the environment is mandatory. It must be emphasized that the highest peak in ammoniacal and organic nitrogen downstream of the landfill occurred during the month when two more drainage tubes were placed in the percolate pond. The leach is therefore contributing towards the pollution of the Borba Gato streamlet.

Nitrite and nitrate: Whereas nitrite levels at times oscillated above the contents stipulated by Conama 357/2005, nitrate was found within the specifications (Figure 9A). The differences between P-01 and P-02 in nitrite (t = 2.542; p = 0.014; CI = 0.14 to 1.17 mg L-1 N) and in nitrate (t = 3.297; p = 0.002; CI = 0.23 to 0.93 mg L-1 N) were statistically significant.

Nitrite levels in the leach ranged between 15.00 and 40.00 mg L-1 N, whereas nitrate contents ranged between 25.00 and 54.00 mg L-1 N (Figure 9B). According to Esteves (1998), nitrite in high concentrations is toxic to most aquatic organisms. Baird (2002) reports that there has recently been great concern with the increase in nitrate ion rates in fresh water, especially in rural areas, because the main source of the ion is the discharge from agricultural land to rivers and streams. This statement is a source of concern, since in the area under analysis there is no other drinking water source for the inhabitants, who retrieve water from wells and springs.

Oxygen demand: The biochemical (BOD5) and chemical (COD) oxygen demands varied at the sites (Figure 10A). However, statistical tests revealed no significant difference between the results of the two analyzed sites for BOD5 (t = -0.194; p = 0.847) or COD (t = -0.067; p = 0.947).

Variations in the organic matter of the leach showed that COD had higher rates in November and BOD5 in December (Figure 10B).
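The repeated comparisons against regulatory limits made throughout this section can be automated with a simple lookup, as sketched below. Only limits explicitly quoted in the text are included (250 mg L-1 for total chloride and 75 mg L-1 Pt for true color); the measured values below are hypothetical placeholders, not the study data.

```python
# Class-2 freshwater limits quoted in the text (Conama resolution 357/2005);
# further parameters can be added as their limits are identified.
CONAMA_LIMITS = {
    "total chloride (mg/L)": 250.0,
    "true color (mg/L Pt)": 75.0,
}

# Hypothetical monthly values at one sampling site (placeholders only).
measured = {
    "total chloride (mg/L)": [18.4, 22.1, 30.7, 25.3],
    "true color (mg/L Pt)": [40.2, 55.8, 92.0, 61.3],
}

for param, values in measured.items():
    limit = CONAMA_LIMITS[param]
    exceedances = [v for v in values if v > limit]
    status = f"{len(exceedances)} exceedance(s)" if exceedances else "all within limit"
    print(f"{param}: limit {limit:g} -> {status}")
```

Feeding the full monthly series per site through such a check flags exactly the months and parameters discussed individually in the text.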
Low levels of organic pollution have been reported in the region of the stream under analysis, with the exception of March 2006, when heavy rainfall occurred immediately before the collection at P-01. Concentrations of BOD5 close to 5 mg L-1 O2 occurred in June, although the low levels of organic and ammoniacal nitrogen during this month showed that pollutants had not been discharged recently (near the site). According to Gastaldini and Mendonça (2003), water with high concentrations of organic and ammoniacal nitrogen and small concentrations of nitrates and nitrites cannot be taken as safe, due to recent pollution. On the other hand, samples without organic nitrogen or ammoniacal nitrogen and with slight traces of nitrate may be relatively safe, since nitrification has already occurred and pollution was not a recent event.

Total solids: P-02 did not have the same profile as P-01 with regard to suspended solids, perhaps due to modifications undertaken by the Municipal Public Works Department at the base of the landfill and around the waste pond during May and November, which discharged more suspended solids into the stream and into the pond (Figure 11).

The difference in suspended solids levels between the sampling sites was significant (t = 2.182; p = 0.032; CI = 0.29 to 6.33 mg L-1), and similarly with regard to total dissolved solids. The increase in suspended solids concentrations in March 2006 in the stream was due to heavy rainfall immediately before the collection. The peak during August at P-01 may be due to the end of the wheat harvest and the start of soil preparation for the planting of soybeans, corn or cotton. In fact, the soil was fallow, and the rains transported a great concentration of organic and inorganic material to the streamlet.

Sulfate and sulfide: The highest levels of sulfur were detected as oxides, or sulfates. However, sites P-01 and P-02 had concentrations within the maximum sulfate limit stipulated by Conama 357/2005 (250 mg L-1 SO4-2) (Figure 12A). Statistical analysis showed that the mean difference in total sulfate between P-01 and P-02 was significant (t = 2.808; p = 0.007; CI = 0.43 to 2.57 mg L-1 SO4-2); it was also significant for sulfide (t = 3.237; p = 0.002; CI = 0.002 to 0.011 mg L-1 S-2).

Figure 12. Temporal variation of total sulfate (SO4-2) and sulfide (S-2) at sampling sites in the Borba Gato streamlet and in the leach deposited in the waste pond.

Total sulfate levels in the leach deposited in the landfill waste pond varied between 45.00 and 285.00 mg L-1 SO4-2, whereas sulfide showed variations between 0.08 and 0.43 mg L-1 S-2 (Figure 12B). Sulfide concentrations at P-03 were within the limits of the effluent discharge standards (1.0 mg L-1 S-2) established by Conama 357/2005.

The high sulfate levels at P-02 during October were due to the placing of two drainage tubes in the landfill waste pond for leach deposits. During the same month, the effluent had its highest sulfate concentration.

Turbidity: Although turbidity concentrations at P-01 and P-02 oscillated, both sites were within the water quality limits of Conama 357/2005 (Figure 13A). Statistical analysis showed that P-01 and P-02 differed significantly (t = 3.247; p = 0.002; CI = 1.42 to 6.05 NTU). Oscillations ranging between 1.40 and 5.76 NTU were reported in the turbidity of the percolate material deposited in the waste pond (Figure 13B).
The highest turbidity levels occurred at P-02 in February, similar to what occurred with suspended solids, true color, sulfide, nitrite, and nitrate.

Dissolved aluminum, copper and iron: Variations occurred in dissolved aluminum, copper, and iron (Figure 14), with concentrations above the freshwater quality limits established by Conama 357/2005 (Class 2) only during some months. Dissolved aluminum rates in the percolate were above the effluent discharge standard during September.

Dissolved aluminum levels revealed significant differences between P-01 and P-02 (t = 2.493; p = 0.016; CI = -0.047 to 0.430), which was probably due to the flow of leach from the landfill waste pond polluting the water of the streamlet. Significant differences did not occur with regard to dissolved copper (t = 0.701; p = 0.485) or dissolved iron (t = 1.769; p = 0.081).

Cadmium, chromium, mercury, nickel, manganese, lead and zinc: During some months, cadmium, chromium, and mercury levels and, during most months, nickel, manganese, lead, and zinc levels at the sampling sites on the streamlet were above the water quality limits established by Conama 357/2005. The levels of these elements in the percolated material were within the effluent discharge limits established by Conama 357/2005 (Figure 15). Statistical analysis showed that cadmium levels did not differ significantly between the sampling sites on the streamlet (t = -0.409; p = 0.683), as occurred with chromium (t = -0.370; p = 0.712), mercury (t = -1.138; p = 0.259), nickel (t = -0.311; p = 0.757), lead (t = 1.283; p = 0.204), zinc (t = -0.558; p = 0.579), and manganese (t = 1.426; p = 0.160). However, the latter element showed no significant difference only because of the high concentrations at P-01 in March 2006, which increased the variation range due to heavy rainfall prior to collection at this site.

The greatest concern among the trace elements under analysis lies with mercury, since many researchers, such as Baird (2002), Ravichandran (2004), and Mirlean et al. (2005), consider it the most toxic within the aquatic environment. Further, mercury is a bio-accumulating element. The mercury pollutant source may have discharged along the streamlet's entire course, since an organic discharge occurred in June and the pollution was not limited to one site. Further, high levels of cadmium, chromium, nickel, and lead were also reported.

Moreover, since the P-01 region is an agricultural area, the soil may have retained trace elements, with the exception of cadmium, from agricultural pesticides or fertilizers, which were leached and discharged into the streamlet during the rainy period. In fact, the second highest levels of mercury, nickel, and zinc were registered during August, precisely at the end of the wheat harvest and during the preparation of the soil for sowing; in other words, the soil was exposed to leaching. According to Bisinoti and Jardim (2004), soils have a high capacity to retain and store mercury due to the strong bonds of this element with carbon; mercury is thus released from the soil by the rains and discharged into the streamlet. In fact, a series of factors may affect the dynamics of mercury.

According to Silva and França (2004), agricultural activities and the maintenance of the waste pond close to water courses, together with the conditions in which such activities develop, constitute risk factors for the contamination and pollution of the region's water resources. According to Andreoli et al.
(2003), water, as a rule, will always contain impurities; practically pure water does not exist in nature. Consequently, its composition depends on the environment and on its assimilation of different pollutants.

Conclusion

The streamlet area under analysis was impacted, probably due to its location in an agricultural area and to the inadequate deposition of urban wastes in the municipality of Maringá, Paraná State, Brazil. It should also be emphasized that, in the area close to site P-01, the streamlet margins have only sparse vegetation and that, during the rainy season, inorganic compounds such as aluminum, iron, and manganese, inherent to the composition of the soil surface, were discharged in great amounts.

Since the streamlet reveals parameters outside the freshwater quality standards of Conama's Class 2, the streamlet within this region should be characterized as exclusively for landscape use; that is, the water body should be classified as Class 4 according to Conama 357/05. A study of the entire basin of the Borba Gato streamlet should be undertaken so that pollution sources can be monitored and detected. Further, a management project should be instituted for the protection of the fauna and flora of this aquatic environment, and an appropriate use for the water course should be determined.

Besides the demonstrated urgent need for efficient treatment of the effluent produced in the landfill waste pond of Maringá, there is also evidence that the leach has polluted the Borba Gato streamlet.

Figure 2. Variation in rainfall level between March 2005 and March 2006.

Figure 3. Temporal variation of the environmental temperature (T am) and of the water of the Borba Gato streamlet (T) at the sampling sites and of the leach deposited in the waste pond.

Figure 4. Temporal variation of dissolved oxygen (DO) at the sampling sites in the Borba Gato streamlet and in the leach in the pond.

Figure 5. Temporal variation of pH at the sampled sites in the Borba Gato streamlet and in the leach deposited in the waste pond.

Figure 6. Temporal variation of true color at the sampled sites of the Borba Gato streamlet and in the leach deposited in the waste pond.

Figure 7. Temporal variation of total chloride at the sampling sites of the Borba Gato streamlet and in the leach in the waste pond.

Figure 8. Temporal variation of organic nitrogen (Norg) and ammoniacal nitrogen (N-NH3) at the sampling sites on the Borba Gato streamlet and in the leach in the waste pond.

Figure 9. Temporal variation of nitrite (NO2) and nitrate (NO3) at the sampling sites of the Borba Gato streamlet and in the leach deposited in the waste pond.

Figure 10. Temporal variation of chemical (COD) and biochemical (BOD5) oxygen demand at the sampling sites in the Borba Gato streamlet and in the leach deposited in the waste pond.

Figure 11. Temporal variation of total suspended solids (TSS) and total dissolved solids (TDS) at the sampling sites in the Borba Gato streamlet and in the leach in the waste pond.

Figure 13. Temporal variation of turbidity at the sampling sites on the Borba Gato streamlet and in the leach deposited in the waste pond.

Figure 14. Temporal variation of dissolved aluminum (Al), copper (Cu) and iron (Fe) at the sampling sites in the Borba Gato streamlet and in the leach deposited in the waste pond.

Table 1. Parameters and methods of the physico-chemical analyses for the study of the percolate of the waste landfill and of the water from the Borba Gato streamlet.
Near-Field Communications: A Tutorial Review

Extremely large-scale antenna arrays, tremendously high frequencies, and new types of antennas are three clear trends in multi-antenna technology for supporting the sixth-generation (6G) networks. To properly account for the new characteristics introduced by these three trends in communication system design, the near-field spherical-wave propagation model needs to be used, which differs from the classical far-field planar-wave one. As such, near-field communication (NFC) will become essential in 6G networks. In this tutorial, we cover three key aspects of NFC. 1) Channel Modelling: We commence by reviewing near-field spherical-wave-based channel models for spatially-discrete (SPD) antennas. Then, uniform spherical wave (USW) and non-uniform spherical wave (NUSW) models are discussed. Subsequently, we introduce a general near-field channel model for SPD antennas and a Green's function-based channel model for continuous-aperture (CAP) antennas. 2) Beamfocusing and Antenna Architectures: We highlight the properties of near-field beamfocusing and discuss NFC antenna architectures for both SPD and CAP antennas. Moreover, the basic principles of near-field beam training are introduced. 3) Performance Analysis: Finally, we provide a comprehensive performance analysis framework for NFC. For near-field line-of-sight channels, the received signal-to-noise ratio and power-scaling law are derived. For statistical near-field multipath channels, a general analytical framework is proposed, based on which analytical expressions for the outage probability, ergodic channel capacity, and ergodic mutual information are obtained. Finally, for each aspect, topics for future research are discussed.

I. INTRODUCTION

As the fifth-generation (5G) wireless network continues to be commercialized, research and development efforts are already underway on a global scale to investigate the possibilities for the sixth-generation (6G) wireless network. Compared to the previous generations of wireless networks, 6G is expected to be data-driven, instantaneous, ubiquitous, and intelligent, thereby facilitating new applications and services, such as extended reality (XR), holographic communication, pervasive intelligence, digital twin, and Metaverse [1]-[3]. Therefore, 6G has significantly higher performance targets than the past generations, such as a 100 times higher peak data rate of at least 1 terabit per second (Tb/s), an air interface latency of 0.01-0.1 milliseconds, and a 10 times larger connectivity density of up to 10^7 devices per square kilometer [4]-[6]. To reach these stringent targets, 6G is expected to integrate the following technical trends:

• Extremely large-scale antenna arrays: Extremely large-scale antenna arrays (ELAAs) are essential for many candidate techniques for 6G. On the one hand, by exploiting ELAAs, supermassive multiple-input multiple-output (MIMO) [5] and cell-free massive MIMO [7] are capable of providing exceptionally high system capacity through their vast array gain and spatial resolution. On the other hand, reconfigurable intelligent surfaces (RISs),
another revolutionary 6G technique, embody an ELAA with passive elements [8]. By manipulating the wireless propagation environment, RISs offer new opportunities for augmenting the coverage and capacity of the 6G network.

• Tremendously high frequencies: The terahertz (THz) band, spanning from 0.1 THz to 10 THz, holds great potential as a promising frequency band for 6G [6]. Compared to the millimeter-wave (mmWave) band utilized in 5G, the THz band provides significantly more bandwidth resources, on the order of tens of gigahertz (GHz) [9]. Furthermore, due to the very small wavelengths in the THz band, an enormous number of antenna elements can be integrated into THz base stations (BSs), thus facilitating the implementation of ELAAs. Due to these benefits, THz communication is expected to support very high data rates on the order of Tb/s [5].

• New types of antennas: Metamaterials are powerful artificial materials that exhibit various desired electromagnetic (EM) characteristics for wireless communications [10]. In recent years, metamaterials have also been exploited in realizing (approximately) continuous transmitting and receiving apertures, thus facilitating holographic beamforming [11], [12]. Compared with conventional beamforming techniques, holographic beamforming realized by continuous-aperture (CAP) antennas has a super high spatial resolution while avoiding undesirable side lobes [7].

The significant increase in the size of antenna arrays, extremely high frequencies, and the emerging new metamaterial-based antennas cause a paradigm shift in the EM characteristics of 6G. Generally, the EM field radiated from antennas can be divided into two regions: the near-field region and the far-field region [13], [14]. The EM waves in these regions exhibit different propagation properties. Specifically, the wavefront of EM waves in the far-field region can be reasonably well approximated as being planar, as shown in Fig. 1(a). Conversely, more complex wave models, such as spherical waves, see Fig. 1(b), are required to accurately depict propagation in the near-field region. The Rayleigh distance is one of the most common figures of merit to distinguish between the near-field and far-field regions and is given by 2D²/λ [13]. Here, D and λ denote the antenna aperture and the wavelength, respectively. From the first-generation (1G) to 5G wireless networks, the near-field was generally limited to a few meters or even centimeters because of the low-dimensional antenna arrays and low frequencies. Therefore, communication systems could be efficiently designed based on the far-field approximation. However, given the large aperture of ELAAs and tremendously high frequencies, 6G networks exhibit a large near-field region on the order of hundreds of meters. For instance, a transmitter of size D = 0.5 meters operating at a frequency of 60 GHz has a near-field region of 100 meters. Furthermore, for the investigation of metamaterial-based CAP antennas, the conventional far-field plane wave assumption is also not suitable [15], [16]. Therefore, in 6G networks, the near-field region is not negligible, which motivates the investigation of the new near-field communication (NFC) paradigm. To shed light on the benefits of NFC, we first distinguish between the near-field and far-field in terms of their EM-radiation properties.
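The Rayleigh distance 2D²/λ quoted above is straightforward to evaluate; the short sketch below reproduces the worked example from the text (D = 0.5 m at 60 GHz yields a 100 m near-field region). The second call uses assumed, purely illustrative values.

```python
C = 3e8  # speed of light (m/s)

def rayleigh_distance(aperture_m: float, freq_hz: float) -> float:
    """Near-field/far-field boundary 2*D^2/lambda for aperture D and carrier frequency f."""
    wavelength = C / freq_hz
    return 2 * aperture_m**2 / wavelength

# Worked example from the text: D = 0.5 m at 60 GHz -> 100 m
print(rayleigh_distance(0.5, 60e9))   # 100.0

# A larger ELAA at a higher frequency pushes the boundary much further out
print(rayleigh_distance(1.0, 100e9))  # ~666.7 m (illustrative values)
```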
A. Distinction between Near-Field and Far-Field Regions

1) General Field Regions: According to EM and antenna theory, the field surrounding a transmitter can be divided into near-field and far-field regions. The near-field region can be further divided into the reactive and radiating near-field regions. The three regions are explained below [18]:

• The reactive near-field region is limited to the space close to the antenna, where the EM fields are predominantly reactive, meaning that they store and release energy rather than propagating away from the antenna as radiating waves. Within the reactive near-field region, evanescent waves are dominant.

• The radiating near-field region is located at a distance from the antenna that is greater than a few wavelengths. In this region, the fields have not fully developed into the planar waves that are characteristic of the far-field region. Within the radiating near-field region, the propagating waves have a spherical wavefront, i.e., spherical waves are dominant.

• The far-field region surrounds the radiating near-field region. In the far-field, the angular field distribution is essentially independent of the distance between the receiver and the transmitter, i.e., planar waves are dominant.

Since the reactive near-field region is typically small and evanescent waves fall off exponentially fast with distance, in the remainder of this paper, we mainly focus on wireless communications within the radiating near-field region. For simplicity, we use the term near-field to refer to the radiating near-field region.

2) Boundary between Near-Field and Far-Field Regions: The transition between the near-field and far-field regions happens gradually, and there is no strict boundary between the two regions. As a result, different works have proposed different metrics for characterising the field boundary. For defining these metrics, two different perspectives are used, namely, the phase error perspective and the channel gain error perspective.

• From the phase error perspective, several commonly used rules of thumb have emerged, including the Rayleigh distance [13], the Fraunhofer condition [14], and the extended Rayleigh distance for MIMO transceivers and RISs [19]. These distances mainly apply to the field boundary close to the main axis of the antenna aperture.

• From the channel gain error perspective, a more accurate description of the field boundary can be given for off-axis regions. Specifically, according to the Friis formula [20], the channel gain falls off with the inverse of the distance squared. However, this does not hold in the near-field region. Therefore, we can define the far-field region as the region where the actual channel gain can be approximated by the Friis formula subject to a tolerable error, as illustrated by the sketch after this list. Exploiting this perspective, the field boundary depends not only on the aperture size and wavelength, but also on the angle of departure, the angle of arrival, and the shape of the transmit antenna aperture.
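To make the channel-gain-error perspective concrete, the sketch below compares the exact aggregated free-space gain of a ULA with its Friis (common-distance) approximation and reports the broadside distance at which they agree within 1%. All numbers (frequency, aperture, tolerance) are illustrative assumptions; the output shows that this gain-based boundary can be far smaller than the phase-based Rayleigh distance.

```python
import numpy as np

WAVELENGTH = 0.005            # 60 GHz carrier (m); illustrative assumption
N, D = 129, 0.5               # number of ULA elements and aperture (m)
pos = np.linspace(-D / 2, D / 2, N)   # element x-coordinates

def exact_gain(r, theta):
    """Aggregate free-space power gain: sum over elements of (lambda/(4*pi*r_n))^2."""
    r_n = np.sqrt((r * np.cos(theta) - pos) ** 2 + (r * np.sin(theta)) ** 2)
    return np.sum((WAVELENGTH / (4 * np.pi * r_n)) ** 2)

def friis_gain(r):
    """Far-field approximation: all elements assumed at the common distance r."""
    return N * (WAVELENGTH / (4 * np.pi * r)) ** 2

# Smallest broadside distance at which the Friis approximation is within 1%
for r in np.linspace(0.3, 50, 2000):
    if abs(exact_gain(r, np.pi / 2) - friis_gain(r)) / friis_gain(r) < 0.01:
        print(f"gain-error boundary ~ {r:.2f} m (vs Rayleigh distance {2*D**2/WAVELENGTH:.0f} m)")
        break
```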
B. Related Overview Articles

As discussed before, because of the quite small near-field region due to the use of small-scale antenna arrays and low operating frequencies, NFC has not been relevant for 1G-5G wireless networks, and hence, the related literature is very sparse. So far, only a few magazine papers [21]-[24] that provide an introduction to NFC have been published. The authors of [21] presented the basic working principle and applications of near-field-focused (NFF) microwave antennas for short-range wireless systems. They introduced various metrics for NFF performance evaluation, including the 3 dB focal spot, focal shift, focusing gain, and side lobe level. The authors of [22] provided an overview of NFC, covering aspects such as field boundaries, challenges, potential applications, and future research directions. They offered a high-level introduction to NFC. Furthermore, the authors of [23] studied the difference between far-field beamsteering and near-field beamfocusing. They emphasized the significant power gain and novel application opportunities that arise from near-field beamfocusing. Finally, the authors of [24] addressed the cross-far-field and near-field issues in THz communications. They discussed the relevant channel model, channel estimation techniques, and hybrid beamforming approaches for cross-field communications. While [21]-[24] review the general concepts of NFC, fundamental aspects of NFC, including basic channel models, antenna structures, and analytical foundations, are not covered. Moreover, a comprehensive tutorial on NFC that specifically caters to the needs of graduate students and researchers seeking to gain a fundamental understanding of NFC is not available in the literature.
C. Motivation and Contributions

NFC will play a significant role in 6G, and fundamental knowledge gaps have to be closed to fully exploit the new opportunities and to address the new challenges arising for NFC. However, a comprehensive tutorial review on NFC is missing in the literature. This is the motivation for this paper, and its main contributions can be summarized as follows:

• We start by reviewing the basic near-field channel models for both SPD and CAP antennas. For SPD antennas, near-field spherical-wave-based channel models are introduced for both multiple-input single-output (MISO) and MIMO systems, where the specific characteristics of near-field channels compared with far-field channels are highlighted. Furthermore, we discuss uniform spherical wave (USW) and non-uniform spherical wave (NUSW) near-field channel models. For CAP antennas, we introduce a Green's function-based near-field channel model.

• We study the properties of near-field beamfocusing and antenna architectures for NFC. We commence with the MISO case, for which we discuss hybrid beamforming architectures based on phase shifters and true time delayers for narrowband and wideband systems, respectively. Then, we propose to exploit practical metasurface-based antennas to approximate CAP antennas. As a further advance, we consider the MIMO case, where the dynamic degrees of freedom (DoFs) of near-field channels are addressed. Finally, near-field beam training is discussed, which can help to significantly decrease the complexity of channel estimation and analog beamforming design.

• We provide a comprehensive performance analysis framework for NFC for both deterministic line-of-sight (LoS) and statistical multipath channels. For near-field LoS channels, we derive new expressions for the received signal-to-noise ratio (SNR) and power scaling laws for both SPD and CAP antennas. For near-field statistical channels, we propose a general theoretical framework for analyzing the outage probability (OP), ergodic channel capacity (ECC), and ergodic mutual information (EMI). Important insights for NFC are unveiled, including the diversity order, array gain, high-SNR slope, and high-SNR power offset.

D. Organization

The remainder of this paper is structured as follows. Section II presents the fundamental near-field channel models for both SPD and CAP antennas. In Section III, the basic principles of near-field beamfocusing and beam training are introduced, and the related antenna architectures for MISO and MIMO systems are provided. The performance of NFC in LoS channels and statistical channels, respectively, is analysed in Section IV. Finally, Section V concludes this paper. Lists with the most important abbreviations and variables are provided in Tables I and II, respectively. Fig. 2 illustrates the organization of this tutorial.

E. Notations

Throughout this paper, for any matrix A, [A]_{m,n}, A^T, A^*, A^H, and ∥A∥_F denote the (m, n)-th entry, transpose, conjugate, conjugate transpose, and Frobenius norm of A, respectively. The matrix inequality A ⪰ 0 indicates positive semi-definiteness of A. [a]_i denotes the i-th entry of vector a, and diag{a} returns a diagonal matrix whose main diagonal elements are the entries of a.
Also, blkdiag{•} represents a block diagonal matrix, I is the identity matrix, 0 is the zero matrix, ∥•∥ denotes the Euclidean norm of a vector, |•| denotes the norm of a scalar, C stands for the complex plane, R stands for the real plane, and E{•} represents mathematical expectation. The big-O notation, f(x) = O(g(x)), means that lim sup_{x→∞} |f(x)|/g(x) < ∞. CN(µ, Σ) is used to denote the complex Gaussian distribution with mean µ and covariance matrix Σ. Finally, ⊗ and ⊙ denote the Kronecker product and the Hadamard product, respectively.

II. NEAR-FIELD CHANNEL MODELLING

In this section, we introduce the fundamental models for near-field channels. As illustrated in Fig. 3, the space surrounding an antenna array can be divided into three regions by two distances, namely the Rayleigh distance [13], [25] and the uniform power distance [25], [26]. As previously discussed, the Rayleigh distance can be used to separate the near-field and far-field regions, and it is mostly relevant for characterizing the behaviour of the phase of an emitted signal. Generally speaking, if the propagation distance of the signal is larger than the Rayleigh distance, the far-field planar-wave-based channel model can be employed, leading to a linear phase of the signals. Conversely, if the signal's propagation distance is less than the Rayleigh distance, the near-field spherical-wave-based channel model has to be utilized, resulting in a non-linear phase of the signal. Moreover, within the near-field region, the uniform power distance can be used to differentiate between regions where the signal amplitude is uniform and non-uniform, respectively. Based on the Rayleigh and uniform power distances, the channel models suitable for characterizing signal propagation can be classified loosely into three categories: 1) the uniform planar wave (UPW) model, 2) the uniform spherical wave (USW) model, and 3) the non-uniform spherical wave (NUSW) model, as shown in Fig. 3. As we will explain in the following sections, due to the different assumptions made, these near-field channel models have different levels of accuracy.

Fig. 3. Field regions with respect to an antenna array.

Regarding scatterers and small-scale fading, near-field channel models can be categorized into deterministic and statistical models, with deterministic models utilizing ray tracing, geometric optics, or electromagnetic wave propagation theories for precise channel gain determination, primarily for line-of-sight (LoS) or near-field channels with a limited number of paths. Statistical models, on the other hand, capture the average behavior, fading effects, and time-varying characteristics of the channel, making them suitable for characterizing rich-scattering environments. Concerning transceiver types, near-field channel models can be classified into models for spatially-discrete (SPD) antennas and CAP antennas. Given the intricate taxonomy of channel models, a comprehensive and systematic overview of the various models is needed. In the following, we introduce the USW and NUSW channel models for NFC systems equipped with SPD antennas. Furthermore, we introduce a general model to accurately capture the impact of the free-space path loss, effective aperture, and polarization mismatch. Moreover, we present a near-field channel model for CAP antennas. Finally, important open research problems in near-field modelling are discussed.
A. Spherical Wave Based Channel Model for SPD Antennas

Fig. 1 highlights the primary distinction between near-field and far-field channels for SPD antennas. Specifically, far-field channels are characterized by planar waves, whereas near-field channels are characterized by spherical waves. Consequently, for far-field channels, the angles of the links between each antenna element and the receiver are approximated to be identical. In this case, the propagation distance of each link increases linearly with the antenna index, resulting in a linear phase of far-field channels. However, for near-field channels, each link has a different angle, leading to a non-linear phase. In the following, we review the near-field spherical-wave-based channel models for MISO and MIMO systems and highlight the primary differences compared to the far-field planar-wave-based channel model.

1) MISO Channel Model: Consider a MISO system in which a transmitter equipped with an N-element antenna array communicates with a single-antenna receiver. Let r = [r_x, r_y, r_z]^T and s_n = [s_n^x, s_n^y, s_n^z]^T denote the Cartesian coordinates of the receive antenna and the n-th element of the transmit antenna array, respectively. Here, we set s_0 = [0, 0, 0]^T as the origin of the coordinate system. Accordingly, r = ∥r − s_0∥ denotes the distance between the receiver and the central element of the transmit antenna array.

As shown in Fig. 4, for the planar-wave-based far-field channel, the links between s_n and r are assumed to have identical angles. Let θ and ϕ denote the azimuth and elevation angles of the receiver with respect to the x-z plane, respectively. Then, the propagation direction vector of the signals from the transmitter to the receiver is given by [27]

k(θ, ϕ) = [cos θ sin ϕ, sin θ sin ϕ, cos ϕ]^T.  (1)

Then, according to the planar-wave assumption of far-field channels, the propagation distance for the link between the n-th transmit antenna and the receiver can be calculated as r_n = r − k^T(θ, ϕ)s_n, resulting in the following channel coefficient [28]:

h_n = β_n e^{−j(2π/λ) r_n} = β_n e^{−j(2π/λ)(r − k^T(θ,ϕ) s_n)},  (2)

where β_n denotes the channel gain (amplitude) for the n-th link and λ denotes the wavelength of the carrier signal. In the far-field region, the propagation distance r is typically beyond the uniform-power distance, where the channel gain disparity of each link is negligible [25], [26]. In this case, we have

β_n ≈ β̃, ∀n.  (3)

Therefore, the far-field LoS MISO channel can be modelled in the following simplified form:

h^far = β̃ e^{−j(2π/λ) r} a^far(θ, ϕ).

The above far-field channel model is referred to as the UPW model [27], [29]. According to (3), the array response vector for far-field channels is given by

Far-Field Array Response Vector

[a^far(θ, ϕ)]_n = e^{j(2π/λ) k^T(θ,ϕ) s_n}.  (4)

It can be observed that, for the elements of the far-field array response vector, the phases are linear functions of the positions s_n.

On the other hand, for the near-field spherical-wave-based channel, the propagation distances of the links between the transmit and receive antennas cannot be calculated assuming identical azimuth and elevation angles, since different links have different angles. Therefore, the propagation distance of the link between the n-th transmit antenna element and the receiver needs to be calculated as r_n = ∥r − s_n∥, resulting in the following channel coefficient [28]:

h_n = β_n e^{−j(2π/λ) ∥r − s_n∥}.  (5)

Then, by assuming that the propagation distance r is larger than the uniform-power distance, we have

β_n ≈ β̃, ∀n.  (6)

In this case, the near-field LoS MISO channel can be modelled as follows:

h^near = β̃ e^{−j(2π/λ) r} a^near(θ, ϕ, r).

The above near-field channel model is referred to as the USW model [30], [31]. The corresponding array response vector is given by

Near-Field Array Response Vector

[a^near(θ, ϕ, r)]_n = e^{−j(2π/λ)(∥r − s_n∥ − r)}.  (7)

In contrast to the far-field array response vector, the phase of the n-th entry of the above near-field array response vector is a non-linear function of s_n.
To analyze this non-linear phase, note that the coordinates of the receiver are given by r = [r cos θ sin ϕ, r sin θ sin ϕ, r cos ϕ]^T = r k(θ, ϕ). Therefore, the near-field propagation distance for the n-th antenna element can be calculated as

r_n = ∥r − s_n∥ = r √(1 − 2k^T(θ, ϕ)s_n / r + ∥s_n∥²/r²)
≈ r − k^T(θ, ϕ)s_n + (∥s_n∥² − (k^T(θ, ϕ)s_n)²) / (2r) + . . ., (8)

where the last step is obtained by the Taylor expansion

√(1 + x) ≈ 1 + x/2 − x²/8 + . . . (9)

For the far-field approximation, only the first two terms in (8) are considered, which leads to r_n ≈ r − k^T(θ, ϕ)s_n. The Rayleigh distance is defined as the distance required such that the phase error of the channel caused by the far-field approximation does not exceed π/8 [13]. When θ = ϕ = π/2, it is given by r_R = 2D²/λ. Moreover, if the first three terms in (8) are considered, the following Fresnel approximation can be obtained [14]:

r_n ≈ r − k^T(θ, ϕ)s_n + (∥s_n∥² − (k^T(θ, ϕ)s_n)²) / (2r). (10)

The Fresnel distance is defined as the distance required such that the phase error of the channel caused by the Fresnel approximation does not exceed π/8 [14]. When θ = ϕ = π/2, it is given by r_F = 0.5 √(D³/λ).

As shown in Fig. 5, scatterers in the environment can cause multipath propagation in near-field channels, where the receiver also receives signals reflected by scatterers via non-line-of-sight (NLoS) paths. The randomness of these multipath NLoS components results in the stochastic behaviour of the channel and consequently requires a statistical channel model. Specifically, the channel between the transmitter and the scatterers can be regarded as a MISO channel. Let L denote the total number of scatterers, r̃_ℓ denote the coordinates of the ℓ-th scatterer, and β̃_ℓ denote the channel gain, which also includes the impact of the random reflection coefficient of the ℓ-th scatterer. Then, the near-field multipath channel can be modelled as follows:

Near-Field Multipath MISO Channel (LoS + NLoS)
h_multi = β e^{−j (2π/λ) r} a_near(r) + Σ_{ℓ=1}^{L} β̃_ℓ a_near(r̃_ℓ). (11)

In (11), the random phases of the β̃_ℓ are assumed to be independent and identically distributed (i.i.d.) and uniformly distributed in (−π, π]. The performance of NFC systems in near-field multipath channels will be discussed in detail in Section IV.
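The boundary distances introduced above are straightforward to evaluate numerically. The following sketch computes the Rayleigh and Fresnel distances for an example ULA; the carrier frequency and array size are illustrative assumptions.

```python
# Sketch: boundary distances for an example ULA, using the definitions above.
import numpy as np

wavelength = 0.01                           # assumed 30 GHz carrier
N, d = 257, wavelength / 2
D = (N - 1) * d                             # array aperture

rayleigh = 2 * D**2 / wavelength            # r_R = 2 D^2 / lambda
fresnel = 0.5 * np.sqrt(D**3 / wavelength)  # r_F = 0.5 sqrt(D^3 / lambda)
print(f"aperture D = {D:.2f} m, Rayleigh = {rayleigh:.1f} m, "
      f"Fresnel = {fresnel:.2f} m")
```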
The above analysis reveals that near-field MISO channels are characterized by the array response vector in (7). However, it is non-trivial to capture the characteristics of near-field channels based on the general expression (7), which entails mathematical difficulties for performance analysis and system design. To obtain more insights, we discuss the near-field array response vectors of two popular antenna array geometries, namely the uniform linear array (ULA) and the uniform planar array (UPA).

• Uniform Linear Array: A ULA is a one-dimensional antenna array arranged linearly with equal antenna spacing. To derive the simplified array response vector, we consider a MISO system with N antennas, where N = 2Ñ + 1, at the transmitter. The spacing between adjacent antenna elements is denoted by d. For the ULA, we can always create a coordinate system such that all transmit antenna elements and the receiver are located in the x-y plane, as shown in Fig. 6. Therefore, we can ignore the z-axis and set ϕ = 90°. Then, by placing the origin of the coordinate system at the center of the ULA, the coordinates of the receiver and the n-th element of the ULA are given by r = [r cos θ, r sin θ]^T and s_n = [nd, 0]^T, ∀n ∈ {−Ñ, . . ., Ñ}, respectively. In this case, the propagation distance ∥r − s_n∥ can be approximated as follows:

∥r − s_n∥ ≈ r − nd cos θ + (n²d² sin²θ) / (2r). (12)

Here, we exploit the Fresnel approximation (10) in the last step. Then, we obtain the following simplified near-field array response vector for ULAs by substituting (12) into (7):

Near-Field Array Response Vector for ULAs
[a(θ, r)]_n = e^{−j (2π/λ) (−nd cos θ + (n²d² sin²θ)/(2r))}, ∀n ∈ {−Ñ, . . ., Ñ}. (13)

• Uniform Planar Array: A UPA is a two-dimensional array of antennas uniformly arranged in a rectangular grid. We consider a MISO system with a UPA deployed in the x-z plane, as illustrated in Fig. 7, composed of N = N_x × N_z antenna elements, where N_x = 2Ñ_x + 1 and N_z = 2Ñ_z + 1. (Fig. 6: system layout of a near-field MISO system with a ULA. Fig. 7: system layout of a near-field MISO system with a UPA.) The antenna spacings along the two directions are denoted by d_x and d_z, respectively. Then, the Cartesian coordinates of the receiver and the (m, n)-th element of the transmit antenna array are given by r = (r cos θ sin ϕ, r sin θ sin ϕ, r cos ϕ) and s_{m,n} = (nd_x, 0, md_z), ∀n ∈ {−Ñ_x, . . ., Ñ_x}, m ∈ {−Ñ_z, . . ., Ñ_z}, respectively. Assuming d_x/r ≪ 1 and d_z/r ≪ 1, the distance ∥r − s_{m,n}∥ can be approximated as follows:

∥r − s_{m,n}∥ ≈ r − nd_x cos θ sin ϕ − md_z cos ϕ + (n²d_x²(1 − cos²θ sin²ϕ)) / (2r) + (m²d_z²(1 − cos²ϕ)) / (2r), (14)

where the last step is obtained by exploiting the Fresnel approximation (10) and omitting the bilinear term. This approximation is sufficiently accurate for the USW model. After removing the constant phase e^{−j (2π/λ) r}, the phase of the array response vector can be divided into two components, namely −nd_x cos θ sin ϕ + (n²d_x²(1 − cos²θ sin²ϕ))/(2r) and −md_z cos ϕ + (m²d_z²(1 − cos²ϕ))/(2r), which only depend on n and m, respectively. Then, we obtain the following result:

Near-Field Array Response Vector for UPAs
a(θ, ϕ, r) = a_x(θ, ϕ, r) ⊗ a_z(θ, ϕ, r), (15a)
[a_x(θ, ϕ, r)]_n = e^{−j (2π/λ) (−nd_x cos θ sin ϕ + (n²d_x²(1 − cos²θ sin²ϕ))/(2r))}, (15b)
[a_z(θ, ϕ, r)]_m = e^{−j (2π/λ) (−md_z cos ϕ + (m²d_z²(1 − cos²ϕ))/(2r))}. (15c)

Furthermore, according to (4), the well-known far-field array response vectors of ULAs and UPAs can be calculated, which are respectively given by:

a_far^ULA(θ) = [e^{−j (2π/λ) Ñ d cos θ}, . . ., e^{j (2π/λ) Ñ d cos θ}]^T, (16)

a_far^UPA(θ, ϕ) = [e^{−j (2π/λ) Ñ_x d_x cos θ sin ϕ}, . . ., e^{j (2π/λ) Ñ_x d_x cos θ sin ϕ}]^T ⊗ [e^{−j (2π/λ) Ñ_z d_z cos ϕ}, . . ., e^{j (2π/λ) Ñ_z d_z cos ϕ}]^T. (17)

By comparing the far-field array response vectors in (16) and (17) with the near-field array response vectors in (13) and (15), we obtain the following insights:

• Near-field channels depend on both angle and distance. This is the major difference between near-field and far-field channels. Compared with far-field channels, the additional distance dependence provides additional DoFs for NFC system design.

• Far-field channels are special cases of near-field channels. It can be observed that the far-field array response vectors in (16) and (17) can be obtained by omitting the phase terms in (13) and (15) that scale with d²/r. This implies that when r is sufficiently large, such that these terms become negligible, the near-field channel reduces to a far-field channel.

2) MIMO Channel Model: We continue by discussing MIMO systems. Let us consider a MIMO system consisting of an N_T-antenna transmitter and an N_R-antenna receiver, where N_R = 2Ñ_R + 1 and N_T = 2Ñ_T + 1. The antenna indices of the transmit and receive antenna arrays are given by n ∈ {−Ñ_T, . . ., Ñ_T} and m ∈ {−Ñ_R, . . ., Ñ_R}, respectively. Let r_m = [r_m^x, r_m^y, r_m^z]^T and s_n = [s_n^x, s_n^y, s_n^z]^T denote the Cartesian coordinates of the m-th element of the receive antenna array and the n-th element of the transmit antenna array, respectively, and let r = ∥r_0 − s_0∥ denote the distance between the central elements of the receive and transmit antenna arrays. Furthermore, s_0 = [0, 0, 0]^T is again the origin of the coordinate system.
For the planar-wave-based far-field channel, let θ and ϕ denote the azimuth and elevation angles of the receiver with respect to the x-z plane, respectively. Then, for the far-field link between the m-th receive antenna and the n-th transmit antenna, the propagation distance is given by r_{m,n} = r − k^T(θ, ϕ)s_n − k^T(θ, ϕ)(r_m − r_0). By defining r̃_m = r_m − r_0, the resulting channel coefficient is given by

h_{m,n}^far(θ, ϕ, r) = β_{m,n} e^{−j (2π/λ) r} e^{j (2π/λ) k^T(θ,ϕ) s_n} e^{j (2π/λ) k^T(θ,ϕ) r̃_m}. (18)

Similar to the MISO case, the channel gains β_{m,n} are assumed to have the identical value β due to the large propagation distance in the far-field region. According to (18), the far-field LoS MIMO channel can be divided into the transmit-side component e^{j (2π/λ) k^T(θ,ϕ) s_n} and the receive-side component e^{j (2π/λ) k^T(θ,ϕ) r̃_m}. Therefore, it can be modelled as the multiplication of transmit and receive array response vectors as follows:

Far-Field LoS MIMO Channel
H_far = β̄ a_{R,far}(θ, ϕ) a_{T,far}^T(θ, ϕ). (19)

Here, β̄ = β e^{−j (2π/λ) r} denotes the complex channel gain, and a_{T,far}(θ, ϕ) and a_{R,far}(θ, ϕ) denote the transmit and receive array response vectors, respectively, which are given as follows:

a_{T,far}(θ, ϕ) = [e^{j (2π/λ) k^T(θ,ϕ) s_{−Ñ_T}}, . . ., e^{j (2π/λ) k^T(θ,ϕ) s_{Ñ_T}}]^T, (20)
a_{R,far}(θ, ϕ) = [e^{j (2π/λ) k^T(θ,ϕ) r̃_{−Ñ_R}}, . . ., e^{j (2π/λ) k^T(θ,ϕ) r̃_{Ñ_R}}]^T. (21)

It is worth noting that the far-field LoS MIMO channel in (19) always has rank one, leading to low DoFs, i.e.,

DoF_far^LoS = rank(H_far) = 1. (22)

For the near-field spherical-wave-based channel, similar to the MISO case, the propagation distance of each link in the LoS channel has to be calculated more accurately as r_{m,n} = ∥r_m − s_n∥, resulting in the following channel coefficient:

h_{m,n}^near = β_{m,n} e^{−j (2π/λ) ∥r_m − s_n∥}. (23)

If the distance r is larger than the uniform-power distance, the channel gains β_{m,n} are approximated to have the same value β. In contrast to the far-field LoS MIMO channel, the near-field LoS MIMO channel cannot be decomposed into transmit-side and receive-side components. Therefore, the near-field LoS MIMO channel needs to be modelled as

Near-Field LoS MIMO Channel
[H_near]_{m,n} = β e^{−j (2π/λ) ∥r_m − s_n∥}. (24)

In particular, the near-field LoS MIMO channel matrix typically has high rank due to the non-linear phase [32], resulting in high DoFs, i.e., DoF_near^LoS = rank(H_near) ≥ 1, as quantified below for specific array geometries.

Furthermore, multipath propagation will also occur in near-field MIMO channels if scatterers are present in the environment. Near-field multipath propagation is illustrated in Fig. 8. As can be observed, the NLoS MIMO channel can be regarded as the combination of two MISO channels with respect to the transmitter and receiver, respectively. Therefore, it can be written as the multiplication of the near-field array response vectors at the transmitter and receiver, and the near-field multipath channel can be modelled as follows:

Near-Field Multipath MIMO Channel (LoS + NLoS)
H_multi = H_near + Σ_{ℓ=1}^{L} β̃_ℓ a_R(r̃_ℓ) a_T^T(r̃_ℓ). (25)

Here, the near-field transmit array response vector a_T(r̃_ℓ) and receive array response vector a_R(r̃_ℓ) are defined as follows:

[a_T(r̃_ℓ)]_n = e^{−j (2π/λ) (∥r̃_ℓ − s_n∥ − ∥r̃_ℓ − s_0∥)}, (26)
[a_R(r̃_ℓ)]_m = e^{−j (2π/λ) (∥r̃_ℓ − r_m∥ − ∥r̃_ℓ − r_0∥)}. (27)

In a rich-scattering environment, i.e., L ≫ 1, the MIMO channel in (25) can achieve full rank due to the random phase shifts imposed by the scatterers, i.e., for the case of SPD antennas, the DoFs are given by:

DoF_near^NLoS = min{N_T, N_R}. (28)

It can be observed that near-field NLoS MIMO channels have a similar structure as far-field MIMO channels, as both can be written as the multiplication of transmit and receive array response vectors. However, near-field LoS MIMO channels have a significantly different structure. In the following, to obtain more insights into near-field LoS MIMO channels, we discuss two specific antenna array geometries, namely parallel ULAs and parallel UPAs.
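The rank behaviour described above can be verified numerically. The following sketch, a toy example with assumed geometry, builds the LoS channel matrix between two parallel ULAs from the exact spherical-wave phases and counts the significant singular values at different distances.

```python
# Sketch: effective rank of the LoS MIMO channel between two parallel ULAs,
# contrasting near-field and far-field behaviour.
import numpy as np

wavelength = 0.01
d_T = d_R = wavelength / 2
N_T, N_R = 129, 129
n = (np.arange(N_T) - N_T // 2) * d_T          # transmit element x-offsets
m = (np.arange(N_R) - N_R // 2) * d_R          # receive element x-offsets
theta = np.pi / 2                               # broadside receiver

def los_mimo_rank(r, thresh=1e-2):
    """Build H with exact spherical-wave phases; count significant singular values."""
    tx = np.stack([n, np.zeros_like(n)], axis=1)
    rx = np.stack([r * np.cos(theta) - m, r * np.sin(theta) + 0 * m], axis=1)
    dist = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)   # (N_R, N_T)
    H = np.exp(-1j * 2 * np.pi / wavelength * dist)
    sv = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(sv / sv[0] > thresh))

for r in [2.0, 10.0, 100.0]:   # Rayleigh distance here is about 82 m
    print(f"r = {r:6.1f} m -> effective rank ≈ {los_mimo_rank(r)}")
```

As the distance grows towards and beyond the Rayleigh distance, the effective rank collapses towards one, consistent with the far-field result in (22).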
• Parallel ULAs: We consider the MIMO system shown in Fig. 9, with an N_T-antenna ULA at the transmitter and an N_R-antenna ULA at the receiver, where N_T = 2Ñ_T + 1 and N_R = 2Ñ_R + 1. In particular, the two ULAs are parallel to each other. The spacings between adjacent transmit and receive antennas are denoted by d_T and d_R, respectively. The angle and distance of the center of the receive ULA with respect to the center of the transmit ULA are denoted by θ and r, respectively. According to the system layout in Fig. 9, the Cartesian coordinates of the n-th element at the transmitter and the m-th element at the receiver are s_n = (nd_T, 0), ∀n ∈ {−Ñ_T, . . ., Ñ_T}, and r_m = (r cos θ − md_R, r sin θ), ∀m ∈ {−Ñ_R, . . ., Ñ_R}, respectively. Therefore, the distance ∥r_m − s_n∥ can be approximated as follows:

∥r_m − s_n∥ ≈^(a) r − (nd_T + md_R) cos θ + ((nd_T + md_R)² sin²θ) / (2r)
= r − nd_T cos θ + (n²d_T² sin²θ)/(2r) − md_R cos θ + (m²d_R² sin²θ)/(2r) + (nd_T md_R sin²θ)/r, (29)

where the Fresnel approximation (10) is exploited in step (a). It can be observed that (29) involves three components, namely −nd_T cos θ + (n²d_T² sin²θ)/(2r), −md_R cos θ + (m²d_R² sin²θ)/(2r), and (nd_T md_R sin²θ)/r, where the first two components depend only on n and m, respectively, while the last one involves both n and m. Therefore, the near-field LoS MIMO channel matrix for parallel ULAs can be expressed via the ULA array response vectors in (13) and an additional coupled component as follows:

Near-Field LoS MIMO Channel for Parallel ULAs
H_ULA^LoS = β̄ (a_R(θ, r) a_T^T(θ, r)) ⊙ H_c, with [H_c]_{m,n} = e^{−j (2π/λ) (nd_T md_R sin²θ)/r}, (30)

where β̄ = β e^{−j (2π/λ) r} denotes the complex channel gain.

Remark 2. As can be observed in (30), the near-field LoS MIMO channel matrix between parallel ULAs includes an additional coupled component, i.e., H_c, which cannot be decomposed into the multiplication of transmitter-side and receiver-side array response vectors. Due to the presence of this coupled component, near-field LoS MIMO channels exhibit higher DoFs than far-field LoS MIMO channels.

For two parallel ULAs, the DoFs of the near-field LoS MIMO channel can be calculated through diffraction theory or eigenfunction analysis, which leads to [32]:

DoF_ULA^LoS ≈ min{ N_T, N_R, ((N_T − 1)d_T (N_R − 1)d_R) / (λr) }. (31)

As can be observed, if the numbers of transmit and receive antennas are large enough, the DoFs between parallel ULAs are given by ((N_T − 1)d_T (N_R − 1)d_R) / (λr).

• Parallel UPAs: The near-field LoS MIMO channel matrix for two parallel UPAs can be calculated in a similar manner as that for two parallel ULAs. Assume that the transmit UPA is deployed in the x-z plane and is composed of N_T = N_T^x × N_T^z antenna elements with spacings d_T^x and d_T^z along the x- and z-directions, and that the receive UPA is parallel to the transmit UPA and is composed of N_R = N_R^x × N_R^z antenna elements with spacings d_R^x and d_R^z along the two directions. The antenna element indices at the transmitter and the receiver are denoted by (m, n), ∀m ∈ {−Ñ_T^z, . . ., Ñ_T^z}, n ∈ {−Ñ_T^x, . . ., Ñ_T^x}, and analogously at the receiver. Then, we have the following result:

Near-Field LoS MIMO Channel for Parallel UPAs
H_UPA^LoS = β̄ (a_R(θ, ϕ, r) a_T^T(θ, ϕ, r)) ⊙ (H_c^x ⊗ H_c^z), (32)

where the coupled components H_c^x and H_c^z are defined analogously to H_c in (30) for the x- and z-directions, respectively. Similar to the case of parallel ULAs, the above near-field LoS MIMO channel matrix for parallel UPAs also involves a coupled component, i.e., H_c^x ⊗ H_c^z, thus resulting in high DoFs. For two parallel UPAs, the DoFs are given by [32]:

DoF_UPA^LoS ≈ min{ N_T, N_R, (2 A_T A_R) / (λr)² }, (33)

where A_T and A_R are the apertures of the transmit and receive UPAs, respectively. Similar to the case of parallel ULAs, if N_T and N_R are sufficiently large, the DoFs are given by 2A_T A_R / (λr)². The main differences between near-field and far-field channels are summarized in Table III.
B. Non-Uniform Channel Model for SPD Antennas

In the previous subsection, we reviewed the near-field channel model based on spherical waves and highlighted its major differences compared with the far-field channel model. Recall that the near-field channel coefficient between a transmit antenna s_n and a receive antenna r_m is given by

h_{m,n} = β_{m,n} e^{−j (2π/λ) ∥r_m − s_n∥}. (34)

In the previous subsection, we assumed that the propagation distance r = ∥r_0 − s_0∥ with respect to the central elements of the antenna arrays is larger than the uniform-power distance, resulting in negligible variations of the channel gains of the different links. However, when r is smaller than the uniform-power distance or the antenna aperture is extremely large, the channel gain variations can no longer be neglected. In this case, more accurate channel gain models for near-field channels are required. In the following, we discuss different channel gain models valid under different assumptions.

1) USW Model [30], [31]: We first briefly review the USW model defined in the previous section. In this model, the propagation distance r is larger than the uniform-power distance, resulting in uniform channel gains, i.e., β_{m,n} ≈ β_{0,0}, ∀m, n. The channel gain β_{0,0} is mainly determined by the free-space path loss, which is given by β_{0,0} = 1/√(4πr²). Therefore, the USW model for near-field channels is given by

USW Model of Near-Field Channels
h_{m,n}^USW = (1/√(4πr²)) e^{−j (2π/λ) ∥r_m − s_n∥}. (35)

2) NUSW Model [33], [34]: In this model, the propagation distance r is smaller than the uniform-power distance, and the channel gain variations are not negligible. As a result, the channel gains of the different links are non-uniform and have to be calculated separately. More specifically, the channel gains can again be calculated according to the free-space path loss, which leads to β_{m,n} = 1/√(4π∥r_m − s_n∥²). The NUSW model for near-field channels is then given as follows:

NUSW Model of Near-Field Channels
h_{m,n}^NUSW = (1/√(4π∥r_m − s_n∥²)) e^{−j (2π/λ) ∥r_m − s_n∥}. (36)
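The non-uniformity captured by the NUSW model can be quantified by the spread of the per-link amplitudes in (36). The following sketch evaluates the ratio of the weakest to the strongest link amplitude across an example ULA at several distances (all values are illustrative assumptions); the ratio approaches one as r grows, anticipating the uniform-power distance discussed below.

```python
# Sketch: channel-gain variation across a ULA under the NUSW model.
import numpy as np

wavelength = 0.01
d, N_half = 0.005, 256
n = np.arange(-N_half, N_half + 1)
x = n * d                                      # element positions on the x-axis

def gain_ratio(r, theta=np.pi / 2):
    """Ratio of weakest to strongest free-space amplitude over the array."""
    rx = np.array([r * np.cos(theta), r * np.sin(theta)])
    dist = np.hypot(rx[0] - x, rx[1])
    beta = 1.0 / np.sqrt(4 * np.pi * dist**2)  # NUSW per-link amplitude
    return beta.min() / beta.max()

for r in [1.0, 3.0, 10.0, 30.0]:
    print(f"r = {r:5.1f} m -> min/max gain ratio = {gain_ratio(r):.3f}")
```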
3) A General Model: Although the NUSW model is more accurate than the USW model within the uniform-power distance, it still fails to capture the loss in channel gain caused by the effective antenna aperture and polarization mismatch, especially when the antenna arrays are of considerable size. The effective antenna aperture characterizes how much power is captured from an incident wave, and the polarization mismatch refers to the angular difference in polarization between the incident wave and the receiving antenna [36], [37]. To this end, we introduce a general model for near-field channels, whose channel gain has three components, namely the free-space path loss, the effective aperture loss G_1, and the polarization loss G_2. In the following, we explain how to calculate G_1 and G_2. Moreover, we assume that the transmit antenna has a unit effective area and the receive antenna is a hypothetical isotropic antenna element whose effective area is given by λ²/(4π) [35]–[37].

(Footnote: Effective aperture or effective area characterizes the received power of an antenna [35]–[38]. Assume that the incident wave has the same polarization as the receive antenna and is travelling towards the antenna in the antenna's direction of maximum radiation, i.e., the direction from which the most power would be received. Then, the effective aperture describes how much power is captured from a given incident wave. Let p_0 be the power density of the incident wave (in Watt/m²). Then, the antenna's received power (in Watts) is given by p_0 A_e. An antenna's effective area or aperture is defined for reception. However, due to reciprocity, an antenna's directivities for reception and transmission are identical, so the power transmitted by an antenna in different directions is also proportional to the effective area [35]–[38].)

• Effective Aperture Loss: As the signals sent by the different array elements are observed by the receiver from different angles, the resulting effective antenna area varies over the array. The effective antenna area equals the product of the maximal value of the effective area and the projection of the array normal onto the signal direction. Let û_s denote the normalized normal vector of the transmitting array at point s_n. For example, when the transmitting array is placed in the x-z plane, we have û_s = [0, 1, 0]^T. Then, the power gain due to the effective antenna aperture is given as follows [39]:

G_1(s_n, r_m) = (û_s^T (r_m − s_n)) / ∥r_m − s_n∥. (37)

• Polarization Loss: The polarization loss is also caused by the fact that the receiver sees the signals sent by the different array elements from different angles. The power loss due to polarization is defined as the squared norm of the inner product between the receiving-mode polarization vector at the receive antenna and the transmitting-mode polarization vector of the transmit antenna.

Lemma 1. The polarization gain factor for transmit antenna s and receive antenna r can be expressed as follows:

G_2(s, r) = |ρ_w^T(r) e(s, r)|², (38)

where ρ_w(r) denotes the normalized receiving-mode polarization vector of the receive antenna and e(s, r) denotes the normalized transmitting-mode polarization vector, obtained by projecting the normalized electric current vector Ĵ(s) at the transmit antenna onto the plane orthogonal to the propagation direction (r − s)/∥r − s∥.

Proof. Please refer to Appendix A. ■

Remark 3. It is worth noting that the influence of polarization loss was also considered in [40], [41]. However, the results derived there apply only when the receiving-mode polarization vector of the receive antenna and the normalized electric current vector are aligned with the y-axis. By contrast, (38) applies to arbitrary ρ_w(r) and Ĵ(s), which yields more generality.

Taking into account the free-space path loss, the effective aperture loss G_1, and the polarization loss G_2, the general model for the near-field channel coefficients is given by

General Model of Near-Field Channels
h_{m,n}^general = √( G_1(s_n, r_m) G_2(s_n, r_m) / (4π∥r_m − s_n∥²) ) e^{−j (2π/λ) ∥r_m − s_n∥}. (40)

Note that all three loss components are functions of the position of the transmit antenna, s_n, and the position of the receive antenna, r_m. In fact, the above-mentioned USW and NUSW channel models are special cases of the general model given in (40), which is further explained in detail in Section IV.
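As a rough illustration of the general model, the following sketch evaluates the three gain components per element for an example geometry. The polarization term follows the projection-based reading of Lemma 1 given above, and the polarization vectors and geometry are assumptions chosen purely for illustration.

```python
# Sketch: per-element gains under the general model, combining free-space
# path loss, effective-aperture loss G1, and polarization loss G2.
import numpy as np

d, N_half = 0.005, 128
n = np.arange(-N_half, N_half + 1)
s = np.stack([n * d, np.zeros(n.size), np.zeros(n.size)], axis=1)  # ULA in x-z plane
u_s = np.array([0.0, 1.0, 0.0])               # array normal
rx = np.array([0.3, 1.0, 0.5])                # nearby receiver (assumed location)
J = np.array([0.0, 0.0, 1.0])                 # z-polarized transmit current (assumed)
rho_w = np.array([0.0, 0.0, 1.0])             # z-polarized receive antenna (assumed)

diff = rx - s                                  # vectors from elements to receiver
dist = np.linalg.norm(diff, axis=1)
G1 = np.abs(diff @ u_s) / dist                 # effective-aperture (projection) gain
k_hat = diff / dist[:, None]
J_t = J - k_hat * (k_hat @ J)[:, None]         # transverse component of the current
e = J_t / np.linalg.norm(J_t, axis=1)[:, None] # transmitting-mode polarization vector
G2 = np.abs(e @ rho_w) ** 2                    # polarization gain per element
beta = np.sqrt(G1 * G2 / (4 * np.pi * dist**2))
print(f"gain spread across the array: min/max = {beta.min() / beta.max():.3f}")
```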
4) Uniform-Power Distance: According to the previous discussion, the uniform-power distance is an important figure of merit distinguishing the region where the USW model is sufficiently accurate from the region where the NUSW model or the general model is required. In particular, the uniform-power distance can be defined based on the ratio of the weakest and strongest channel gains of the NUSW model or the general model. Let us take the general model as an example, where the channel gains of the links are given by

β_{m,n} = √( G_1(s_n, r_m) G_2(s_n, r_m) / (4π∥r_m − s_n∥²) ). (41)

Then, the uniform-power distance r_UPD can be defined as follows [25], [26]:

r_UPD = min { r : (min_{m,n} β_{m,n}) / (max_{m,n} β_{m,n}) ≥ Γ }, (42)

where Γ is the minimum threshold for the ratio. Generally, the value of Γ is selected to be slightly smaller than 1. In this case, when r ≥ r_UPD, all channel gains β_{m,n}, ∀m, n, have comparable values, indicating that the USW model is sufficiently accurate. Additionally, the uniform-power distance r_UPD can also be approximately calculated based on the NUSW model, for which the channel gain is given as follows:

β_{m,n} = 1/√(4π∥r_m − s_n∥²). (43)
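Since the gain ratio in (42) increases monotonically with the distance r, the uniform-power distance can be found by a simple bisection search, as sketched below for the NUSW gains in (43) and an assumed array geometry.

```python
# Sketch: numerically locating the uniform-power distance r_UPD from the
# NUSW gain ratio, for a threshold Gamma chosen slightly below 1.
import numpy as np

d, N_half = 0.005, 256
x = np.arange(-N_half, N_half + 1) * d
Gamma = 0.95

def ratio(r, theta=np.pi / 2):
    dist = np.hypot(r * np.cos(theta) - x, r * np.sin(theta))
    beta = 1.0 / np.sqrt(4 * np.pi * dist**2)
    return beta.min() / beta.max()

# The ratio increases monotonically with r, so bisection suffices.
lo, hi = 0.1, 1e4
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if ratio(mid) < Gamma else (lo, mid)
print(f"r_UPD ≈ {hi:.2f} m for Gamma = {Gamma}")
```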
C. Green's Function-Based Channel Model for CAP Antennas

Near-field channel modelling for CAP antennas is much more challenging than that for SPD antennas. In this subsection, we consider the scenario where both transmitter and receiver are equipped with CAP antennas, which is analogous to the MIMO scenario for SPD antennas. In contrast to SPD antennas, CAP antennas support a continuous distribution of source currents, denoted by J(s), where s is the source point within the transmitting volume V_T. The electric radiation field E(r) can be formulated as follows [42]:

E(r) = ∫_{V_T} G(s, r) J(s) ds, (44)

where G(s, r) is the tensor Green's function and r denotes the field point (location of the receiver). For simplicity, we consider the case where the wireless signal is vertically polarized. As a result, the equivalent electric currents induced within V_T point in the y-direction, i.e., J(s) = û_y J_y(s), and the electric field they generate at the receiver is E(r) = û_y E_y(r). According to (44), this received field is given as follows [43]:

E_y(r) = ∫_{V_T} G_yy(s, r) J_y(s) ds, (45)

where G_yy is the (y, y)-th element of the Green's tensor. For free-space transmission, we have:

G(s, r) = −(jωµ_0 / (4π)) (I + (1/k_0²) ∇∇^T) (e^{−j k_0 ∥r − s∥} / ∥r − s∥), (46)

where µ_0 and ϵ_0 are the free-space permeability and permittivity, respectively, ω is the angular frequency of the signal, and k_0 = ω√(µ_0 ϵ_0) is the wave number. It is worth noting that the Green's function in (46) can be further simplified using different approximations. In fact, the approximations used to arrive at the UPW, USW, and NUSW models for SPD antennas can also be applied to the Green's function for CAP antennas. We will further elaborate on this in Section IV-A3. To obtain the channel gain between CAP antennas, we evaluate the overall received signal power over the receiving volume V_R, i.e.,

P_R = ∫_{V_R} |E_y(r)|² dr. (47)

Then, by substituting (45) into (47) and denoting the normalized current distribution within V_T by J̃_y(s), we obtain the end-to-end channel gain between two CAP antennas as follows [32]:

Green's Function-Based Near-Field Channel Gain for CAP Antennas
|h|² = ∫_{V_R} | ∫_{V_T} G_yy(s, r) J̃_y(s) ds |² dr, (48)

where ⟨J̃_y(s), J̃_y(s)⟩ = ∫_{V_T} J̃_y(s) J̃_y*(s) ds = 1 holds, V_R denotes the receiving volume, and

K(s_1, s_2) = ∫_{V_R} G_yy*(s_1, r) G_yy(s_2, r) dr. (49)

Due to the presence of multiple orthogonal current distributions across the transmit and receive apertures, we can encode data streams into these orthogonal currents to realize D_CAP parallel channels [40]:

y_n = h_n x_n + w_n, n ∈ {1, . . ., D_CAP}, (50)

where h_n is the channel coefficient of the n-th parallel channel, x_n is the input symbol for the n-th channel, which is associated with the current distribution J_n(s), and w_n is the additive noise at the receiver. Note that the relation between |h_n|² and J_n(s) is given by (48). In order to determine the orthogonal currents J_n(s), which is necessary to exploit the corresponding DoFs, we consider the following eigenvalue problem, where the kernel function K is a Hermitian operator:

∫_{V_T} K(s_1, s_2) J_n(s_2) ds_2 = |h_n|² J_n(s_1). (51)

Since K(s_1, s_2) is a Hermitian operator, the eigenvalue problem in (51) has eigenfunctions {J_1, J_2, . . .} with corresponding real-valued eigenvalues {|h_1|², |h_2|², . . .}. The DoFs of the near-field channel model for CAP antennas are analyzed as follows. The channel gains between two CAP antennas can be calculated as the eigenvalues of (51). Thus, the DoFs are equal to the number of eigenfunctions J_n corresponding to non-negligible eigenvalues |h_n|², since only these eigenfunctions yield useful channels. In particular, for wireless communication between two rectangular prism CAP antennas, the available DoFs are given as follows [43]:

D_CAP ≈ (∆z_T ∆z_R) / (λr), (52)

where r is the communication distance (defined as the distance between the centers of the two CAP antennas) and ∆z_{T/R} is the width of the transmitting/receiving volume of the CAP antennas. As can be observed from (52), the DoFs of CAP antennas not only increase with the aperture size but also depend on the carrier frequency (inverse of wavelength) and the communication distance.
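The DoF result in (52) can be reproduced numerically by discretizing the eigenvalue problem (51). The following sketch uses a scalar free-space Green's function for two parallel line apertures (sizes and distance are illustrative assumptions) and compares the number of significant eigenvalues with ∆z_T ∆z_R/(λr).

```python
# Sketch: DoFs between two CAP line apertures via a discretized kernel
# K(s1, s2) = ∫ G*(s1, r) G(s2, r) dr with a scalar Green's function.
import numpy as np

wavelength = 0.01
k0 = 2 * np.pi / wavelength
Lz_T, Lz_R, r = 0.5, 0.5, 10.0                   # aperture widths and separation
Ns = 400                                          # discretization points per aperture
zT = np.linspace(-Lz_T / 2, Lz_T / 2, Ns)
zR = np.linspace(-Lz_R / 2, Lz_R / 2, Ns)

dist = np.sqrt(r**2 + (zR[:, None] - zT[None, :]) ** 2)      # (Ns, Ns)
G = np.exp(-1j * k0 * dist) / (4 * np.pi * dist)             # scalar Green's function
K = (G.conj().T @ G) * (Lz_R / Ns)                           # discretized kernel
eig = np.sort(np.linalg.eigvalsh(K * (Lz_T / Ns)))[::-1]     # Hermitian eigenvalues

dof_num = int(np.sum(eig / eig[0] > 1e-2))
print(f"numerical DoFs ≈ {dof_num}, analytical dz_T*dz_R/(lambda*r) = "
      f"{Lz_T * Lz_R / (wavelength * r):.1f}")
```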
D. Discussion and Open Research Problems

In this section, we reviewed the most important near-field channel models and introduced general channel models for SPD and CAP antennas. The discussed near-field models are interrelated; specifically, the simplified USW models for ULAs and UPAs are derived by adopting the Fresnel approximation for the USW model. In addition, as shown in Fig. 3, the UPW and USW models are special cases of the NUSW model and the general model. Thus, the UPW, USW, NUSW, and general models have increasing levels of accuracy and complexity. Although accurate models for free-space deterministic near-field channels have been established in this section, see also [30]–[34], [42], [43], statistical channel models for near-field multipath fading still remain an open problem. Further research on statistical near-field channel modelling is required to fully describe the behaviour of near-field channels. Some of the key research directions are as follows:

• Accurate and compact statistical channel models for SPD antennas: New statistical models need to be developed which capture the complex dynamics of the near-field channel, such as the impact of obstacles, reflections, and diffraction. However, it is difficult to explicitly model each multipath component of a near-field NLoS channel. Compact statistical channel models are needed to capture both the multipath effect and the near-field effects. Another challenge is to develop accurate models for the reactive near-field region, where evanescent waves are dominant.

• Statistical channel models for CAP antennas: For CAP antennas, existing channel models are based on the Green's function method [42], [43]. However, the Green's function method is non-trivial to use in scattering environments. This is because the explicit modelling of the signal sources is difficult for the multipath components caused by scatterers. Developing statistical channel models for CAP antennas remains an open problem.

• Validation of existing models by channel measurements: Further validation of statistical channel models for NFC is required using empirical measurements [44]. Specifically, channel measurements involve measuring the received signal strength, time delay, and phase shift of the signals. Overall, validating and verifying channel models using channel measurements is an iterative process that requires careful experimentation, analysis, and comparison with the discussed models.

III. NEAR-FIELD BEAMFOCUSING AND ANTENNA ARCHITECTURES

In wireless communications, beamforming is used to enhance signal strength and quality by directing the signal towards the intended receiver using an array of antennas. This requires adjusting the phase and amplitude of each antenna element to create a constructive radiation pattern, rather than radiating the signal uniformly. Compared with FFC beamforming, NFC introduces a new beamforming paradigm, referred to as beamfocusing. In this section, we present the properties of near-field beamfocusing and then discuss various antenna architectures for narrowband and wideband communication systems, followed by an introduction to near-field beam training.

A. Near-Field Beamfocusing

The near-field array response vector under the spherical-wave assumption depends on both the angle and the distance between transmitter and receiver, cf. (7), (13), and (15). By taking advantage of this property, NFC beamforming can be designed to act like a spotlight, allowing the signal to be focused on a specific location in the polar domain. This is known as beamfocusing and is different from FFC beamforming. In FFC, beamforming can only be used to steer the transmitted signal in a specific direction in the angular domain, similar to a flashlight, which is known as beamsteering. To further elaborate, we consider a ULA with N transmit antenna elements, where N = 2Ñ + 1.
For illustration, we take the USW model in (13) as an example. In this case, the n-th element of the near-field array response vector, where n ∈ {−Ñ, . . ., Ñ}, can be written as

[a(θ, r)]_n = e^{−j (2π/λ) (−nd cos θ + (n²d² sin²θ)/(2r))}. (53)

Taking the above array response vector model as an example, in the following we introduce two important properties of near-field beamfocusing, namely asymptotic orthogonality and depth of focus.

1) Asymptotic Orthogonality: The near-field array response vector exhibits an asymptotic orthogonality [45], which implies that the correlation between two array response vectors tends to zero when the number of antenna elements N is sufficiently large. Mathematically, this can be expressed as follows:

lim_{N→∞} (1/N) |a^H(θ_1, r_1) a(θ_2, r_2)| = 0, for (θ_1, r_1) ≠ (θ_2, r_2). (54)

In Fig. 11, we depict the correlation of the array response vector a(θ, r) with all other possible array response vectors in the two-dimensional space for θ = π/4 and r = 5.5 m. As can be observed, as N increases, the vector a(θ, r) gradually becomes orthogonal to all other array response vectors, particularly in the distance domain. This asymptotic orthogonality is fundamental to near-field beamfocusing. For instance, consider a two-user communication system, where two single-antenna users are located at (θ_1, r_1) and (θ_2, r_2), respectively. Assuming an LoS-only communication channel, the received signal at user 1 can be expressed as

y_1 = β_1 a^T(θ_1, r_1)(f_1 s_1 + f_2 s_2) + n_1, (55)

where β_1 is the channel gain, f_1 and f_2 represent the beamformers for the two users, s_1 and s_2 are the desired signals of the two users, and n_1 denotes the complex Gaussian noise at user 1 with power σ_1². Then, the signal-to-interference-plus-noise ratio (SINR) for decoding s_1 at user 1 is given by

SINR_1 = (|β_1|² |a^T(θ_1, r_1) f_1|²) / (|β_1|² |a^T(θ_1, r_1) f_2|² + σ_1²). (56)

According to the asymptotic orthogonality in (54), the beamformers can be designed as f_k = √(P_k/N) a*(θ_k, r_k), k ∈ {1, 2}, where P_1 and P_2 denote the transmit powers allocated to user 1 and user 2, respectively. Consequently, for a sufficiently large N, the SINR can be calculated as

SINR_1 = (P_1 |β_1|² N) / ((P_2 |β_1|²/N) |a^T(θ_1, r_1) a*(θ_2, r_2)|² + σ_1²) ≈ (P_1 |β_1|² N) / σ_1², (57)

where the second equality stems from the fact that |a^T(θ_1, r_1) a*(θ_1, r_1)| = N and the approximation in the last step is due to the asymptotic orthogonality in (54). Drawing from the aforementioned analysis, we can deduce that in NFC it is possible to focus the desired signal of a specific user precisely at the intended location without introducing interference to other users situated elsewhere. This implies that beamfocusing can be achieved. Compared to far-field beamsteering, for near-field beamfocusing, the distance also contributes to the asymptotic orthogonality of the array response vectors. Thus, inter-user interference can be effectively mitigated even if the users are located in the same direction.
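The asymptotic orthogonality in (54) is easy to check numerically. The following sketch evaluates the normalized correlation between two near-field steering vectors that share the same angle but have different distances, for a growing number of antennas (all parameters are illustrative assumptions).

```python
# Sketch: asymptotic orthogonality in the distance domain, using the
# Fresnel-approximated phases in (53).
import numpy as np

wavelength, theta = 0.01, np.pi / 4
d = wavelength / 2
r1, r2 = 5.5, 15.0

def steering(N_half, r):
    n = np.arange(-N_half, N_half + 1)
    phase = -n * d * np.cos(theta) + n**2 * d**2 * np.sin(theta) ** 2 / (2 * r)
    return np.exp(-1j * 2 * np.pi / wavelength * phase)

for N_half in [32, 128, 512, 2048]:
    a1, a2 = steering(N_half, r1), steering(N_half, r2)
    N = 2 * N_half + 1
    corr = np.abs(np.vdot(a1, a2)) / N        # (1/N) |a^H(theta,r1) a(theta,r2)|
    print(f"N = {N:5d}: normalized correlation = {corr:.4f}")
```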
2) Depth of Focus: In the previous subsection, we discussed the asymptotic orthogonality when the number of antennas tends to infinity. However, in practice, the number of antennas is limited, which implies that the orthogonality between two different near-field array response vectors cannot be strictly achieved. The depth of focus is an important metric for evaluating the attainability of the orthogonality of near-field array response vectors in the distance domain [21], [31]. This characteristic sets near-field beamfocusing apart from far-field beamsteering. Let us take the beamformer f = √(P/N) a*(θ, r) as an example, where P denotes the transmit power. We aim to find an interval [r_min, r_max] of maximum length, such that for r_0 ∈ [r_min, r_max], the following condition holds:

(1/N²) |a^T(θ, r_0) a*(θ, r)|² ≥ Γ_DF, (58)

where Γ_DF is a desired threshold. Then, the depth of focus is defined as the length of this interval, i.e., DF = r_max − r_min. From the SINR perspective, when a user is located in the same direction but outside the depth of focus, the interference generated by beamformer f is relatively small. Consequently, a smaller depth of focus indicates better beamfocusing performance. Typically, the depth of focus is calculated based on a 3 dB criterion, i.e., Γ_DF = 1/2. The 3 dB depth of focus of beamformer f in different directions is given in the following lemma.

Lemma 2. (Depth of focus) The 3 dB depth of focus of beamformer f = √(P/N) a*(θ, r) in direction θ is given by

DF_3dB = (2 r² r_DF) / (r_DF² − r²) if r < r_DF, and DF_3dB = ∞ otherwise, (59)

where r_DF = (N²d² sin²θ) / (2λ η_3dB²) and η_3dB = 1.6.

Proof. Please refer to Appendix B. ■

From Lemma 2, we observe that the depth of focus tends to infinity if the focal distance r is larger than the threshold r_DF, as shown in Fig. 12. This implies that beamfocusing degenerates to beamsteering, since the orthogonality in the distance domain is almost lost. Therefore, the region within distance r_DF is referred to as the focusing region, where beamfocusing is achievable. To obtain more insights, we compare r_DF with the Rayleigh distance. Recall that for θ = π/2, the Rayleigh distance is given by r_R = 2D²/λ, where D = (N − 1)d denotes the aperture of the ULA. When N is large, we thus have r_DF/r_R ≈ sin²θ/(4η_3dB²) ≈ 0.1 sin²θ. The above result suggests that beamfocusing is not universally achievable in the near-field region. Instead, it can only be achieved within a limited fraction, specifically within approximately one-tenth, of the near-field region. For example, for a BS with a Rayleigh distance of 350 m, the focusing region is confined to just 35 m according to the 3 dB depth of focus. Therefore, ELAAs are crucial for near-field beamfocusing as they can realize both a large focusing region and a small depth of focus. When the number of antennas tends to infinity, it can be easily shown that DF_3dB → 0 and r_DF → ∞, which implies that optimal near-field beamfocusing can be achieved in the full space. In the following, we discuss beamfocusing for two different ELAA architectures, namely SPD and CAP antennas.
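The depth of focus can also be measured numerically without invoking the closed-form expression. The following sketch scans the normalized beamforming gain in (58) over candidate distances and extracts the contiguous 3 dB region around the focal point; geometry and focal distance are illustrative assumptions.

```python
# Sketch: numerically evaluating the 3 dB depth of focus for a beam
# focused at (theta, r_focus).
import numpy as np

wavelength, d = 0.01, 0.005
N_half, theta, r_focus = 512, np.pi / 2, 10.0
n = np.arange(-N_half, N_half + 1)
N = n.size

def steering(r):
    phase = -n * d * np.cos(theta) + n**2 * d**2 * np.sin(theta) ** 2 / (2 * r)
    return np.exp(-1j * 2 * np.pi / wavelength * phase)

f = np.conj(steering(r_focus)) / np.sqrt(N)     # beamformer focused at r_focus
r0 = np.linspace(5.0, 20.0, 6000)
gain = np.array([np.abs(steering(x) @ f) ** 2 / N for x in r0])  # = 1 at r_focus

# Contiguous 3 dB region around the peak (Gamma_DF = 1/2).
peak = int(gain.argmax())
lo = hi = peak
while lo > 0 and gain[lo - 1] >= 0.5 * gain[peak]:
    lo -= 1
while hi < r0.size - 1 and gain[hi + 1] >= 0.5 * gain[peak]:
    hi += 1
print(f"3 dB depth of focus ≈ {r0[hi] - r0[lo]:.2f} m around r = {r0[peak]:.2f} m")
```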
B. Beamfocusing with SPD Antennas

Conventionally, the signal processing in multi-antenna systems is carried out in the baseband, i.e., fully-digital signal processing. However, this is not practical for near-field beamfocusing, since ELAAs and extremely high carrier frequencies are needed, where the number of power-hungry radio-frequency (RF) chains has to be kept at a minimum [46]. On the other hand, purely analog processing causes a loss in performance compared to digital processing. As a remedy, hybrid beamforming has been proposed as a practical solution to address this issue. In hybrid beamforming, a limited number of RF chains are utilized by a low-dimensional digital beamformer, followed by a high-dimensional analog beamformer [47]–[50]. Generally, the analog components are power-friendly and easy to implement. In the following, we discuss different hybrid beamforming architectures for narrowband and wideband systems to facilitate near-field beamfocusing.

1) Narrowband Systems: Let us consider a BS equipped with an N-antenna ULA, where N = 2Ñ + 1, serving K single-antenna users. The received signal at user k is given by

y_k = h_k^T F_RF F_BB x + n_k, (60)

where x ∈ C^{K×1} contains the information symbols for the K users and n_k ∼ CN(0, σ_k²) is additive white Gaussian noise (AWGN). F_RF ∈ C^{N×N_RF} and F_BB ∈ C^{N_RF×K} denote the analog beamformer and the baseband digital beamformer, respectively, where N_RF is the number of RF chains. h_k ∈ C^{N×1} denotes the multipath NFC channel comprising one LoS path and L_k resolvable NLoS paths for user k. According to (11) and (53), adopting the USW model and ULAs, the multipath channel h_k is given as follows:

h_k = β_k a(θ_k, r_k) + Σ_{ℓ=1}^{L_k} β̃_{k,ℓ} a(θ̃_{k,ℓ}, r̃_{k,ℓ}), (61)

where β_k and β̃_{k,ℓ} denote the complex channel gains of the LoS link and the ℓ-th NLoS link, respectively, (θ_k, r_k) is the location of user k, and (θ̃_{k,ℓ}, r̃_{k,ℓ}) is the location of the ℓ-th scatterer for user k. In NFC, the analog beamformer realizes an array gain by generating beams focusing on specific locations, such as the locations of the users and scatterers, while the digital beamformer is designed to realize a multiplexing gain. The analog beamformer F_RF is subject to specific constraints imposed by the hardware architecture of the analog beamforming network. In the following, we first briefly review the conventional phase-shifter (PS) based hybrid beamforming architecture.

In PS-based hybrid beamforming architectures, the RF chains can be connected either to all antennas (referred to as the fully-connected architecture) or to a subset of antennas (referred to as the sub-connected architecture) via PSs, as shown in Fig. 13(a) and Fig. 13(b), respectively. The respective analog beamformers can be expressed as

F_RF^full = [f_1, . . ., f_{N_RF}], F_RF^sub = blkdiag{f_1, . . ., f_{N_RF}}, (63)

where f_n ∈ C^{N×1} and f_n ∈ C^{(N/N_RF)×1} denote the analog beamformers for the n-th RF chain in the fully-connected and sub-connected architectures, respectively. The required numbers of PSs for these two architectures are N_RF N and N, respectively. Recalling the fact that PSs can only adjust the phase of a signal, the unit-modulus constraint has to be satisfied, which implies

|[f_n]_i| = 1, ∀n, i. (64)

For the PS-based hybrid beamforming architecture, the achievable rate of user k is given by

R_k = log_2 ( 1 + |h_k^T F_RF f_{BB,k}|² / (Σ_{i≠k} |h_k^T F_RF f_{BB,i}|² + σ_k²) ), (66)

where f_{BB,k} denotes the k-th column of F_BB. Then, the analog and digital beamformers can be designed to achieve different objectives with respect to R_k, such as maximizing the spectral efficiency or the energy efficiency. However, due to the coupling between F_RF and F_BB and the constant-modulus constraint on the elements of F_RF, it is not easy to directly obtain the optimal analog and digital beamformers. In the following, two alternative approaches for solving hybrid beamforming optimization problems are introduced, namely fully-digital approximation [46]–[49] and heuristic two-stage optimization [51]–[54].
• Fully-digital Approximation: This approach aims to minimize the distance between the hybrid beamformer and the unconstrained fully-digital beamformer F_opt, which can be effectively obtained via existing methods such as successive convex approximation (SCA) [55], weighted minimum mean square error (WMMSE) [56], and fractional programming (FP) [57]. Then, we only need to solve the following optimization problem, where we take the fully-connected hybrid beamforming architecture as an example, with P_max denoting the maximum transmit power:

min_{F_RF, F_BB} ∥F_opt − F_RF F_BB∥_F  s.t. |[F_RF]_{i,j}| = 1, ∀i, j, ∥F_RF F_BB∥_F² ≤ P_max. (68)

The existing literature suggests that fully-digital approximation can achieve near-optimal performance [46]. In particular, several methods have been proposed to solve problem (68), e.g., via orthogonal matching pursuit (OMP) [47], manifold optimization [48], and block coordinate descent (BCD) [49]. Furthermore, it has been proved in [50] that the Frobenius norm in problem (68) can be made exactly zero when the number of RF chains is not less than twice the number of information streams, i.e., N_RF ≥ 2K. However, it is important to note that the complexity of this approach can be exceedingly high, especially when the number of antennas is extremely large, e.g., N = 512. On the one hand, this approach requires the acquisition of the optimal F_opt, which has the potentially large dimension N × K. On the other hand, the design of F_RF by solving problem (68) also involves a large number of optimization variables. The resulting potentially high computational complexity can present a practical challenge in real-world applications.

• Heuristic Two-stage Optimization: This approach involves a two-step process for designing the beamformers. First, the analog beamformer F_RF is constructed heuristically to generate beams focused at specific locations. Subsequently, the digital beamformer F_BB is optimized with reduced dimension N_RF × K. One popular solution for the analog beamformer is to design each column of F_RF such that the corresponding beam is focused at the location of one of the users through the LoS path. Hence, for N_RF = K, the following closed-form solutions can be obtained for the fully-connected and sub-connected architectures, respectively:

f_n = a*(θ_n, r_n), (69)
f_n = [a*(θ_n, r_n)]_{(n−1)N/N_RF+1 : nN/N_RF}, (70)

where (θ_n, r_n) denotes the location of user n and (70) selects the entries of the focusing vector associated with the n-th subarray. For the resulting F_RF, the equivalent channel g_k ∈ C^{N_RF×1} for baseband processing can be obtained as follows:

g_k^T = h_k^T F_RF. (71)

Then, the achievable rate of user k is given as follows:

R_k = log_2 ( 1 + |g_k^T f_{BB,k}|² / (Σ_{i≠k} |g_k^T f_{BB,i}|² + σ_k²) ). (72)

As a result, the optimization of F_BB can be regarded as a reduced-dimension fully-digital beamformer design problem. It can be solved with existing methods [55]–[57].
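The following sketch illustrates the heuristic two-stage design for the fully-connected architecture with LoS-only channels: the analog stage applies (69) and the digital stage uses a simple zero-forcing solution on the equivalent channel (71). Zero-forcing is chosen here only for simplicity; the user locations and system parameters are illustrative assumptions.

```python
# Sketch of the heuristic two-stage design: focused analog beams per user,
# then reduced-dimension zero-forcing in the baseband.
import numpy as np

rng = np.random.default_rng(0)
wavelength, d, N_half, K, P_max = 0.01, 0.005, 128, 4, 1.0
n = np.arange(-N_half, N_half + 1)
N = n.size

def steering(theta, r):
    phase = -n * d * np.cos(theta) + n**2 * d**2 * np.sin(theta) ** 2 / (2 * r)
    return np.exp(-1j * 2 * np.pi / wavelength * phase)

users = [(rng.uniform(np.pi / 6, 5 * np.pi / 6), rng.uniform(3, 20)) for _ in range(K)]
H = np.stack([steering(th, r) for th, r in users])   # K x N LoS channels (unit gain)

# Stage 1: analog beamformer, one focused beam per user (unit-modulus entries).
F_RF = np.conj(H).T                                  # N x K = [a*(theta_k, r_k)]

# Stage 2: zero-forcing digital beamformer on the K x K equivalent channel.
G = H @ F_RF                                         # g_k^T = h_k^T F_RF
F_BB = np.linalg.pinv(G)
F_BB *= np.sqrt(P_max) / np.linalg.norm(F_RF @ F_BB)  # meet the power constraint

eff = H @ F_RF @ F_BB                                # should be ~diagonal
print("residual inter-user interference:",
      f"{np.abs(eff - np.diag(np.diag(eff))).max():.2e}")
```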
Compared to the fully-digital approximation approach, the heuristic two-stage approach exhibits much lower computational complexity due to the closed-form design of the analog beamformers and the low-dimensional optimization of the digital beamformers.

2) Wideband Systems: In wideband communication systems, orthogonal frequency-division multiplexing (OFDM) is usually adopted to effectively exploit the large bandwidth resources and overcome frequency-selective fading. Let W, f_c, and M_c denote the system bandwidth, the central frequency, and the number of subcarriers of the OFDM system, respectively. The frequency of subcarrier m is f_m = f_c + (2m − 1 − M_c)W/(2M_c), m ∈ {1, . . ., M_c}. Adopting the USW model for ULAs, the frequency-domain channel of user k at subcarrier m after the discrete Fourier transform (DFT) is given by [28], [58]

h_{m,k} = β_{m,k} a(f_m, θ_k, r_k) + Σ_{ℓ=1}^{L_k} β̃_{m,k,ℓ} a(f_m, θ̃_{k,ℓ}, r̃_{k,ℓ}), (73)

where β_{m,k} and β̃_{m,k,ℓ} represent the complex channel gains at subcarrier m of the LoS and NLoS paths, respectively. We note that in (73), the near-field array response vector a(f_m, θ, r) is written as a function of the carrier frequency due to its frequency dependence. More specifically, for the array response vector in (53), the phase is inversely proportional to the signal wavelength λ, and thus proportional to the signal frequency f. In practice, the antenna spacing is usually fixed to half a wavelength at the central frequency, i.e., d = λ_c/2 = c/(2f_c), where c denotes the speed of light. Thus, the array response vector for subcarrier m can be expressed as follows:

[a(f_m, θ, r)]_n = e^{−j (2π f_m/c) (−nd cos θ + (n²d² sin²θ)/(2r))}, ∀n ∈ {−Ñ, . . ., Ñ}. (74)

From (74), it can be observed that the wideband near-field array response vector is frequency-dependent, thus leading to frequency-dependent communication channels for the different OFDM subcarriers. Then, if the conventional PS-based hybrid beamforming architecture is used, the received signal at subcarrier m of user k is given by

y_{m,k} = h_{m,k}^T F_RF F_BB^m x_m + ñ_{m,k}, (75)

where x_m ∈ C^{K×1} and ñ_{m,k} ∼ CN(0, σ_{m,k}²) denote the vector containing the information symbols for the K users and the AWGN at subcarrier m, respectively. F_RF ∈ C^{N×N_RF} and F_BB^m ∈ C^{N_RF×K} denote the analog beamformer and the low-dimensional digital beamformer for subcarrier m, respectively. It can be observed that the analog beamformer F_RF realized by the PSs is frequency-independent, which causes a mismatch with respect to the frequency-dependent wideband communication channel. This mismatch leads to the so-called near-field beam split effect [22], [59]. In other words, a PS-based analog beamformer focusing on a specific location for one subcarrier will focus on different locations for other subcarriers. To elaborate on this phenomenon, we assume that one column f_RF of F_RF is designed such that the beam is focused on location (θ_c, r_c) at the central frequency f_c, i.e.,

f_RF = a*(f_c, θ_c, r_c). (76)

Then, the impact of the near-field beam split effect is unveiled in the following lemma.

Lemma 3. (Near-field beam split) At subcarrier m, the beam generated by f_RF = a*(f_c, θ_c, r_c) is focused on the location (θ_m, r_m) given by

cos θ_m = (f_c/f_m) cos θ_c, r_m = (f_m sin²θ_m) / (f_c sin²θ_c) r_c. (77)

Proof. Please refer to Appendix C. ■

Lemma 3 implies that the beam generated by f_RF = a*(f_c, θ_c, r_c) will focus on the location (θ_m, r_m) at subcarrier m, rather than on the desired location (θ_c, r_c). The resulting beam split effect is illustrated in Fig. 14. If the subcarrier frequency f_m deviates significantly from the central frequency f_c, a substantial loss of array gain occurs due to the beam split effect.
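The following sketch quantifies the near-field beam split effect: a PS-based beam designed via (76) for the central frequency is evaluated at the intended focal location for all subcarriers (bandwidth, array size, and geometry are illustrative assumptions). The array gain degrades towards the band edges.

```python
# Sketch: near-field beam split with a frequency-independent PS beamformer.
import numpy as np

c = 3e8
f_c, W, M_c = 30e9, 3e9, 9                     # assumed carrier, bandwidth, subcarriers
d = c / f_c / 2
N_half = 256
n = np.arange(-N_half, N_half + 1)
N = n.size
theta_c, r_c = np.pi / 3, 8.0

def steering(f, theta, r):
    phase = -n * d * np.cos(theta) + n**2 * d**2 * np.sin(theta) ** 2 / (2 * r)
    return np.exp(-1j * 2 * np.pi * f / c * phase)

f_RF = np.conj(steering(f_c, theta_c, r_c))    # PS beam designed at f_c
for m in range(M_c):
    f_m = f_c + (2 * m + 1 - M_c) * W / (2 * M_c)
    gain = np.abs(steering(f_m, theta_c, r_c) @ f_RF) / N
    print(f"f_m = {f_m / 1e9:6.2f} GHz: normalized array gain = {gain:.3f}")
```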
To mitigate the performance degradation induced by beam split, an efficient method is to exploit true time delayers (TTDs) [60]–[62], which, similar to PSs, are analog components. TTDs enable the realization of frequency-dependent phase shifts by introducing variable time delays of the signals. Specifically, a time delay t in the time domain corresponds to a frequency-dependent phase shift of e^{−j2πf_m t} at subcarrier m in the frequency domain. Therefore, TTDs can be exploited to realize frequency-dependent analog beamforming, which is a viable approach for mitigating the near-field beam split phenomenon. A straightforward way to implement such beamformers is to replace all PSs with TTDs in the conventional hybrid beamforming architecture. However, the cost and power consumption of TTDs are much higher than those of PSs, which makes this strategy impractical. As a remedy, several TTD-based hybrid beamforming architectures leveraging both PSs and TTDs have been proposed [63]–[66], as depicted in Fig. 15. These architectures feature an additional low-dimensional time-delay network comprising a few TTDs, which is inserted between the digital beamformer and the high-dimensional PS-based analog beamformer. For the TTD-based hybrid beamforming architecture, the received signal at subcarrier m of user k is given by

y_{m,k} = h_{m,k}^T F_PS T_m F_BB^m x_m + ñ_{m,k}, (78)

where F_PS denotes the frequency-independent analog beamformer realized by the PSs and T_m denotes the frequency-dependent analog beamformer realized by the TTDs. Similar to PS-based hybrid beamforming, TTD-based hybrid beamforming can be classified into two architectures:

• Fully-connected Architecture: As illustrated in Fig. 15(a), in the fully-connected architecture, each RF chain is connected to all antenna elements through TTDs and PSs based on the following strategy. Each RF chain is first connected to N_T TTDs, and then each TTD is connected to a subarray of N/N_T antenna elements via N/N_T PSs. Therefore, the number of PSs in the TTD-based hybrid beamforming architecture is the same as in the conventional PS-based hybrid beamforming architecture, i.e., N_PS = N_RF N. The number of TTDs in this architecture is given by N_TTD = N_RF N_T. The analog beamformers realized by the PSs and the TTDs in the fully-connected architecture can be expressed as, respectively,

F_PS = [F_1^(full), . . ., F_{N_RF}^(full)] ∈ C^{N×N_RF N_T}, (79)
T_m = blkdiag{e^{−j2πf_m t_1}, . . ., e^{−j2πf_m t_{N_RF}}} ∈ C^{N_RF N_T×N_RF}. (80)

Here, F_n^(full) ∈ C^{N×N_T} represents the PS-based analog beamformer connected to the n-th RF chain via the TTDs and is given by

F_n^(full) = blkdiag{f_{n,1}, . . ., f_{n,N_T}}, (81)

with f_{n,ℓ} ∈ C^{(N/N_T)×1} denoting the PS-based analog beamformer connecting the ℓ-th TTD and the n-th RF chain. The constant-modulus constraint needs to be satisfied for each element of f_{n,ℓ}. Furthermore, t_n ∈ R^{N_T×1} denotes the time delays realized by the TTDs connected to the n-th RF chain. In practice, the maximum time delay that can be achieved by TTDs is limited, yielding the following constraint:

t_n ∈ [0, t_max]^{N_T}, ∀n, (82)

where t_max denotes the maximum delay that can be realized by the TTDs. For ideal TTDs, we have t_max = +∞.
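To illustrate how TTDs counteract beam split, the following sketch implements a simple single-RF-chain variant in the spirit of the subarray-based far-field approximation of [66]: the PSs match the residual near-field phases within each subarray at f_c, while one TTD per subarray tracks the frequency-dependent bulk delay. The delay assignment and parameter values are illustrative assumptions, not the exact design of [66].

```python
# Sketch: mitigating near-field beam split with per-subarray TTDs.
import numpy as np

c, f_c, W, M_c = 3e8, 30e9, 3e9, 9
d = c / f_c / 2
N, N_T = 512, 32                              # antennas, TTDs (subarrays)
n = np.arange(N) - (N - 1) / 2
theta, r_focus = np.pi / 3, 8.0

x = n * d
r_n = np.sqrt(r_focus**2 - 2 * r_focus * x * np.cos(theta) + x**2)  # exact distances
sub = np.repeat(np.arange(N_T), N // N_T)     # subarray index of each element
r_center = np.array([r_n[sub == l].mean() for l in range(N_T)])

ps = np.exp(1j * 2 * np.pi * f_c / c * (r_n - r_center[sub]))       # PS phases at f_c
t = (r_center.max() - r_center) / c           # non-negative TTD delays
for m in range(M_c):
    f_m = f_c + (2 * m + 1 - M_c) * W / (2 * M_c)
    w = ps * np.exp(-1j * 2 * np.pi * f_m * t[sub])                  # TTD phases
    h = np.exp(-1j * 2 * np.pi * f_m / c * r_n)                      # channel phases
    print(f"f_m = {f_m / 1e9:6.2f} GHz: normalized gain = {np.abs(h @ w) / N:.3f}")
```

In contrast to the PS-only sketch above, the gain here remains close to one across the whole band, since the residual frequency-dependent phase error is confined to the small subarray apertures.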
• Sub-connected Architecture: As shown in Fig. 15(b), in the sub-connected architecture, each RF chain is connected to a subarray of antenna elements via TTDs and PSs [59]. The small number of antenna elements in each subarray reduces the beam split effect across the subarray. Consequently, the number of TTDs required for each RF chain can be significantly reduced, to the point where a single TTD may suffice. In the following, we present the signal model for the simplest case, where each RF chain is connected to a single TTD. In particular, the analog beamformers realized by the PSs and the TTDs are, respectively, given by

F_PS = blkdiag{f_1, . . ., f_{N_RF}}, T_m = diag{e^{−j2πf_m t_1}, . . ., e^{−j2πf_m t_{N_RF}}}, (84)

where f_n ∈ C^{(N/N_RF)×1} and t_n ∈ [0, t_max] denote the coefficients of the PSs and the TTD connected to the n-th RF chain, respectively. By exploiting TTD-based hybrid beamforming, for both the fully-connected and the sub-connected architecture, the achievable rate of user k at subcarrier m is given by

R_{m,k} = log_2 ( 1 + |h_{m,k}^T F_PS T_m f_{BB,k}^m|² / (Σ_{i≠k} |h_{m,k}^T F_PS T_m f_{BB,i}^m|² + σ_{m,k}²) ), (86)

where f_{BB,k}^m denotes the k-th column of F_BB^m. TTD-based hybrid beamforming can again be designed based on fully-digital approximation or heuristic two-stage optimization. These two approaches are detailed as follows.

• Fully-digital Approximation: In this approach, the optimal unconstrained fully-digital beamformer F_opt^m is designed for each subcarrier m. Then, the TTD-based hybrid beamformer is optimized to minimize the distance to F_opt^m. The resulting optimization problem is given as follows, where we take the fully-connected architecture as an example, with P_max^m denoting the maximum transmit power available at subcarrier m:

min_{F_BB^m, f_{n,ℓ}, t_n} Σ_{m=1}^{M_c} ∥F_opt^m − F_PS T_m F_BB^m∥_F  s.t. |[f_{n,ℓ}]_i| = 1, t_n ∈ [0, t_max]^{N_T}, ∥F_PS T_m F_BB^m∥_F² ≤ P_max^m, ∀m, n, ℓ, i. (87)

Compared to the conventional PS-based hybrid beamforming design problem for narrowband systems in (68), problem (87) is more challenging for the following reasons. On the one hand, the three sets of optimization variables F_BB^m, f_{n,ℓ}, and t_n are deeply coupled. On the other hand, the exponential form of T_m with respect to t_n adds a further challenge. Furthermore, this approach may lead to high complexity because it requires the design of the large-dimensional F_opt^m for a large number of subcarriers and the optimization of the large-dimensional F_PS and T_m. The development of efficient algorithms for solving problem (87) is still in its infancy.

• Heuristic Two-stage Optimization: In this approach, the complexity is significantly reduced by designing the analog beamformer heuristically in closed form. Then, the low-dimensional digital beamformer can be optimized with low complexity. In contrast to conventional hybrid beamforming, the analog beamformers realized by the PSs and the TTDs need to be jointly designed in TTD-based hybrid beamforming, such that the beams of all subcarriers are focused on the desired location. For wideband FFC systems, several heuristic designs for analog beamformers have been reported in recent studies [64], [65] to address the beam split effect. However, these designs are not applicable in near-field scenarios. As a further advance, the authors of [66] developed a novel heuristic approach to mitigate the near-field beam split effect based on a far-field approximation for each antenna subarray. However, this approach only accounts for a single RF chain. Therefore, additional research is needed to develop general heuristic designs for TTD-based hybrid beamforming architectures in wideband NFC.
In Fig. 16(a), we compare the spectral efficiency of TTD-based hybrid beamforming with fully-digital beamforming and conventional hybrid beamforming for a wideband OFDM system as a function of the SNR. In particular, the conventional hybrid beamformer is configured to generate a beam focused on the user's location at the central frequency. The TTD-based hybrid beamformer is designed based on the far-field approximation proposed in [66]. As can be observed, the fully-digital beamformer achieves the highest spectral efficiency due to its ability to generate a dedicated beam for each subcarrier. However, this comes at the cost of extremely high power consumption. Thanks to its capability to realize frequency-dependent analog beamforming, the TTD-based hybrid beamformer achieves a performance comparable to that of the fully-digital beamformer. In contrast, the conventional PS-based hybrid beamforming architecture performs poorly due to the significant beam split effect. The benefits of TTD-based hybrid beamforming are further illustrated in Fig. 16(b), where the spectral efficiencies of the different subcarriers are shown. As can be observed, TTD-based hybrid beamforming achieves a high spectral efficiency for all subcarriers. However, for conventional PS-based hybrid beamforming, only the subcarriers in the proximity of the central frequency achieve good performance. These results further underscore the importance of TTD-based hybrid beamforming architectures for wideband NFC systems.

C. Beamfocusing with CAP Antennas

In this subsection, we focus our attention on near-field beamfocusing with CAP antennas. Although CAP antennas generally consist of a continuous radiating surface, achieving independent and complete control of each radiating point on the surface proves to be an insurmountable task. Therefore, sub-wavelength sampling of the continuous surface becomes crucial [68]. Metasurface antennas are a design approach for approximating CAP antennas and are implemented with metamaterials [11], [12], [68]. For conventional antenna arrays, it is customary to employ an antenna spacing that is not less than half of the operating wavelength λ. This is mainly attributed to the practical constraints posed by the sub-wavelength size of conventional antennas and the associated mutual coupling effects caused by closely spaced deployment. However, metamaterials, with their unique electromagnetic properties, can be utilized to overcome these limitations. By exploiting metamaterials, a metasurface antenna with numerous metamaterial radiation elements can be realized, allowing for reduced power consumption and ultra-close element spacing on the order of λ/10 to λ/5 [68]. The transmit signals are generally fed to the metasurface antenna by a waveguide [11], [12]. The signal from each RF chain propagates in the waveguide to excite the metamaterial elements. Subsequently, the excited metamaterial elements tune the amplitude or phase of the signal before emitting it. Therefore, metasurface antennas can be regarded as a kind of hybrid beamforming architecture, which we refer to as the metasurface-based hybrid beamforming architecture.

1) Narrowband Systems: We first focus on narrowband systems.
As shown in Fig. 17, the metasurface-based hybrid beamforming architecture is generally composed of N_RF feeds connected to N_RF RF chains and N reconfigurable metamaterial elements which can tune the signal. Specifically, the signal of each RF chain is fed into a waveguide as a reference signal. Then, each metamaterial element is excited by the reference signal propagating in the waveguide and radiates the object signal. Let x̃_i denote the signal generated by the digital beamformer and emitted by the i-th feed. The reference signal propagated to the n-th metamaterial element through the waveguide is given by [68]

x_n^ref = Σ_{i=1}^{N_RF} e^{−j (2π n_r/λ) d_{n,i}} x̃_i, (88)

where n_r denotes the refractive index of the waveguide and d_{n,i} denotes the distance between the i-th feed and the n-th metamaterial element. Then, the object signal radiated by the n-th metamaterial element is given by

s_n = ψ_n x_n^ref, (90)

where ψ_n denotes the configurable weight of the n-th metamaterial antenna element. Therefore, the overall object signal after the analog and digital beamformers can be modelled as follows:

s = Ψ Q F_BB x, (91)

where Ψ = diag{ψ_1, . . ., ψ_N} and [Q]_{n,i} = e^{−j (2π n_r/λ) d_{n,i}}. In particular, the configurable weight ψ_n represents the Lorentzian resonance response of the metamaterial element, which, depending on the implementation, can be controlled while meeting one of the following three sets of constraints [68]:

• Continuous amplitude control: In this case, the metamaterial element is assumed to be near resonance, which provides the modality to tune the amplitude of each element without significant phase shifts. Therefore, the configurable weight ψ_n is constrained by

ψ_n ∈ [0, 1]. (93)

• Discrete amplitude control: In this case, the amplitude of each metamaterial element can be adjusted based on a set of discrete values. The corresponding constraint on ψ_n is given by

ψ_n ∈ {0, 1/(C−1), 2/(C−1), . . ., 1}, (94)

where C denotes the number of candidate amplitudes.

• Lorentzian-constrained phase-shift control: In this case, phase-shift tuning of each metamaterial element can be achieved. However, the phase shift and the amplitude of each element are coupled due to the Lorentzian resonance. Therefore, phase shifts and amplitudes need to be jointly controlled based on the following constraint:

ψ_n = (j + e^{jφ_n}) / 2, φ_n ∈ [0, 2π]. (95)

It can be verified that the phase of ψ_n is restricted to the range [0, π] and the amplitude is coupled with the phase, i.e., ψ_n = sin(θ_n) e^{jθ_n} with θ_n = ∠ψ_n ∈ [0, π].

By exploiting metasurface-based hybrid beamforming in narrowband communications, the received signal at user k is given by

y_k = h_k^T Ψ Q F_BB x + n_k. (96)

We note that since the CAP antenna is sampled by the discrete metamaterial elements of the metasurface, the communication channel h_k can be approximated by the near-field multipath MISO channel model developed for SPD antennas in (11). Then, the achievable rate of user k is obtained as follows:

R_k = log_2 ( 1 + |h_k^T Ψ Q f_{BB,k}|² / (Σ_{i≠k} |h_k^T Ψ Q f_{BB,i}|² + σ_k²) ). (97)

The analog beamformer Ψ and the digital beamformer F_BB need to be designed properly to maximize the system performance. However, unlike hybrid beamforming for SPD antennas, an unconstrained fully-digital equivalent does not exist for metasurface-based hybrid beamforming in practice, which makes the fully-digital approximation approach for analog beamformer design nearly infeasible. Therefore, the analog beamformer Ψ and the digital beamformer F_BB need to be optimized directly. Let f(Ψ, F_BB) denote a suitable performance metric, such as the spectral efficiency or the energy efficiency. Then, we have the following optimization problem, where F_ψ denotes the feasible set of ψ_n given by (93), (94), or (95):

max_{Ψ, F_BB} f(Ψ, F_BB)  s.t. ψ_n ∈ F_ψ, ∀n, ∥Ψ Q F_BB∥_F² ≤ P_max. (98)

The main challenges in solving the above problem stem from the high dimensionality of Ψ and the intractable constraints imposed on Ψ, especially for the Lorentzian-constrained phase shift.
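The amplitude-phase coupling in (95) can be verified numerically. The following sketch samples feasible Lorentzian-constrained weights and confirms that their modulus equals the sine of their phase; it is a consistency check only, not an optimization algorithm for (98).

```python
# Sketch: Lorentzian-constrained metasurface weights psi = (j + e^{j*phi})/2,
# verifying that |psi| = sin(arg(psi)) with arg(psi) in [0, pi].
import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(0, 2 * np.pi, 8)            # tunable Lorentzian phase states
psi = (1j + np.exp(1j * phi)) / 2             # feasible weights per (95)

arg = np.angle(psi)                            # lies in [0, pi]
print("max deviation of |psi| from sin(arg(psi)):",
      f"{np.abs(np.abs(psi) - np.sin(arg)).max():.2e}")
```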
How to develop efficient algorithms to solve (98) is still an open problem, which requires additional research efforts.

2) Wideband Systems: For wideband NFC systems, the challenges arising from the frequency-dependent array response vector also exist for metasurface-based hybrid beamforming. However, the frequency dependence of the waveguide has the potential to mitigate the near-field beam split effect. To elaborate, in wideband systems, the received signal at subcarrier m of user k is given by

y_{m,k} = h_{m,k}^T Ψ Q_m F_BB^m x_m + ñ_{m,k}, (99)

where Q_m is defined as

[Q_m]_{n,i} = e^{−j (2π n_r f_m/c) d_{n,i}}. (100)

Based on this frequency-dependent waveguide propagation matrix, an interference-pattern-based design was proposed in [69] to mitigate the far-field beam split effect by exploiting the metasurface-based hybrid beamforming architecture. How to use the frequency dependence of the waveguide to mitigate the near-field beam split is an interesting direction for future research.

D. MIMO Extensions

As discussed in Section II, near-field MIMO channels can provide more DoFs than far-field MIMO channels, especially in the LoS-dominant case. Therefore, MIMO systems can yield substantial performance gains compared to MISO systems in NFC. In particular, in the far field, the LoS MIMO channel matrix is always rank-one, cf. (19), and thus can support only a single data stream and cannot fully exploit the benefits of MIMO. By contrast, in NFC, the high-rank LoS MIMO channel matrix is capable of supporting multiple data streams, thus providing a higher multiplexing gain and enhancing the communication performance. Let us consider a near-field narrowband single-user MIMO system with two parallel ULAs with N_T and N_R antenna elements at the transmitter and receiver, respectively. Employing the hybrid beamforming architecture at the BS, the received signal at the user is given by

y = H_ULA^LoS F_RF F_BB x + n, (101)

where H_ULA^LoS ∈ C^{N_R×N_T} denotes the near-field LoS channel matrix for parallel ULAs given in (30) and x ∈ C^{N_s×1} collects the information symbols of the N_s data streams. The maximum number of data streams that can be supported is determined by the DoFs of the channel matrix H_ULA^LoS. However, the primary obstacle in such a system is that the rank of H_ULA^LoS is not a constant but rather contingent on the distance r between the BS and the user. Specifically, as unveiled by (31), when N_T and N_R are sufficiently large, the DoFs of H_ULA^LoS are given by (N_T − 1)d_T (N_R − 1)d_R / (λr), which decrease gradually as the distance r increases, reducing the number of data streams that can be supported. Additionally, as pointed out in [50], for the fully-connected hybrid beamforming architecture, 2N_s RF chains are sufficient to achieve the same performance as a fully-digital beamformer, which implies that deploying more than 2N_s RF chains will only result in more power consumption without enhancing the communication performance. Therefore, in near-field MIMO systems, the number of RF chains needs to be dynamically adapted based on the distance r to achieve the best possible performance [70].
This can be achieved by the hybrid beamforming architecture shown in Fig. 18, where a switching network is introduced to control the number of active RF chains. Let N_RF^max denote the maximum number of available RF chains. Then, the received signal at the user can be rewritten as follows:

y = H_ULA^LoS F_RF F_S F_BB x + n,

where F_S = diag(α_1, …, α_{N_RF^max}) with α_n ∈ {0, 1} models the switching network. In particular, α_n = 1 means that the n-th RF chain is active, while α_n = 0 indicates that the n-th RF chain is switched off. The achievable rate of the user is given by

R = log₂ det(I_{N_R} + σ^{−2} H_ULA^LoS W W^H (H_ULA^LoS)^H),

where W = F_RF F_S F_BB, i.e., W W^H denotes the covariance matrix of the transmit signal. Since the purpose of the switching network is to avoid a waste of energy, the power consumption of the RF chains and the PSs needs to be considered. In particular, the total numbers of active RF chains and PSs are

N_RF^act = Σ_{n=1}^{N_RF^max} α_n  and  N_PS^act = N_T Σ_{n=1}^{N_RF^max} α_n,

respectively. The corresponding power consumption is given by

P = P_RF N_RF^act + P_PS N_PS^act,

where P_RF and P_PS denote the power consumed by one RF chain and one PS, respectively. Then, the optimal F_RF, F_S, and F_BB can be obtained by solving the following multi-objective optimization problem:

max_{F_RF, F_S, F_BB} R − μP  s.t.  α_n ∈ {0, 1}, ∀n, (106)

where μ ≥ 0 is a weight factor that can be adjusted to trade off rate maximization against low power consumption. Furthermore, the dynamic RF chain concept can also be applied in the context of TTD-based and metasurface-based hybrid architectures. The corresponding optimization problems can be formulated in a manner similar to (106). Note that problem (106) is a mixed-integer nonlinear programming (MINLP) problem, which is NP-hard. The branch-and-bound (BnB) algorithm [71] can be used to obtain the globally optimal solution of MINLP problems, but has exponential complexity. Furthermore, low-complexity algorithms, such as the alternating direction method of multipliers (ADMM) [72] and machine-learning-based methods [73], can be employed to obtain high-quality sub-optimal solutions; a code sketch of a simple selection strategy is provided below.

Based on the discussion in the previous subsections, the hybrid beamforming architectures for NFC are summarized in Table IV.
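The following sketch illustrates the rate-versus-power trade-off behind (106) in its simplest form. We assume (purely for illustration) that n active RF chains support n data streams over the strongest channel eigenmodes with equal power allocation, and we use a random matrix as a stand-in for the near-field LoS MIMO channel in (30); this is an exhaustive sweep, not the BnB or ADMM algorithms cited above.

```python
import numpy as np

# Dynamic RF chain selection in the spirit of (106): for each number of
# active chains, evaluate a rate-minus-power utility and keep the best.
rng = np.random.default_rng(0)
N_T, N_R, N_RF_max = 64, 16, 8
p, sigma2 = 1.0, 1e-2                        # transmit power, noise power
P_RF, P_PS, mu = 0.25, 0.02, 0.5             # W per RF chain / PS, weight

H = rng.normal(size=(N_R, N_T)) + 1j * rng.normal(size=(N_R, N_T))
sv = np.linalg.svd(H, compute_uv=False)      # channel singular values

best = None
for n in range(1, N_RF_max + 1):
    rate = np.sum(np.log2(1 + p / n * sv[:n] ** 2 / sigma2))
    power = n * (P_RF + N_T * P_PS)          # fully-connected PS network
    utility = rate - mu * power
    if best is None or utility > best[0]:
        best = (utility, n, rate, power)

print(f"best: {best[1]} active RF chains, rate {best[2]:.1f} bit/s/Hz, "
      f"power {best[3]:.2f} W")
```

Increasing μ shifts the optimum toward fewer active chains, mimicking the distance-dependent behaviour discussed above: at large distances the weak eigenmodes no longer justify the extra RF power.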
E. Near-Field Beam Training

In the previous subsections, we have discussed several hybrid beamforming architectures for realizing near-field beamfocusing. However, to optimize the hybrid beamformer, channel state information (CSI) is required. Conventionally, CSI is obtained via channel estimation. However, in the case of NFC employing ELAAs, the complexity of conventional channel estimation techniques increases significantly. To address this challenge, beam training has been proposed as a fast and efficient method for reducing the complexity of CSI acquisition and obtaining high-quality analog beamformers [46]. Rather than estimating the complete CSI of the high-dimensional near-field channel, beam training establishes a training procedure between the BS and the users to estimate the physical locations of the channel paths, where the optimal codeword yielding the largest received power at the user is selected from a pre-designed beam codebook. Then, the optimal codeword is exploited as analog beamformer for transmission. Once the analog beamformer is selected, conventional channel estimation methods can be employed to estimate the low-dimensional equivalent channel comprising the original channel and the analog beamformer. The communication protocol including beam training is shown in Fig. 19 [74]-[76].

Compared to the far field, near-field beam training imposes new challenges. To elaborate, let us first provide a brief summary of far-field beam training based on hierarchical codebooks [77]. Generally, for conventional far-field beam training, the hierarchical codebook contains codewords that correspond to directional beams with varying beamwidths, ranging from wide to narrow. As shown in Fig. 20, these codewords are then searched in a tree architecture to determine the optimal one. In this approach, wider beams are designated as "nodes" in the tree architecture, while narrower beams that are encompassed by a wider beam are regarded as the "leaves" of that wider beam. The beam training starts with the BS transmitting a pilot signal to the user using the wider beams and selecting the one yielding the highest received power at the user. The BS then transmits a pilot signal using the narrower beams that are "leaves" of the selected wider beam and repeats this process until the last level of the tree architecture is reached.

Fig. 20. Far-field beam training process.

Fig. 21. Near-field beam training process.
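A toy simulation of this tree descent is sketched below. For simplicity (an assumption, not the codebook design of [77]), each "beam" is approximated by the far-field steering vector pointed at the center of its angular sector; real hierarchical codebooks use properly widened beams at the upper levels, and this simplification can mis-track at early levels.

```python
import numpy as np

# Hierarchical far-field beam training (Fig. 20): at each level, the angular
# range in the sin-domain is split into two halves, the BS probes both
# halves, and the half yielding the larger received power is kept.
rng = np.random.default_rng(1)
N = 128                                      # ULA size, half-wavelength spacing

def steer(u):                                # u = sin(theta)
    return np.exp(1j * np.pi * np.arange(N) * u) / np.sqrt(N)

u_true = rng.uniform(-1, 1)                  # user direction (unknown to BS)
h = steer(u_true)                            # LoS far-field channel (unit gain)

lo, hi = -1.0, 1.0
for level in range(int(np.log2(N))):         # descend the binary beam tree
    mid = (lo + hi) / 2
    pw_left = abs(h.conj() @ steer((lo + mid) / 2)) ** 2
    pw_right = abs(h.conj() @ steer((mid + hi) / 2)) ** 2
    lo, hi = (lo, mid) if pw_left >= pw_right else (mid, hi)

print(f"true sin(theta) = {u_true:+.4f}, estimate = {(lo + hi) / 2:+.4f}")
```

The key takeaway is complexity: the tree search needs only O(log N) probes instead of the O(N) probes of an exhaustive sweep, which is exactly the property that near-field training seeks to preserve despite the added distance dimension.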
The design of beam training involves two parts, namely codebook design and training protocol design, which have been widely investigated for FFC [77]-[84]. However, far-field beam training methods are not directly applicable to near-field channels due to the absence of distance information in far-field codebooks. Consequently, it is necessary to revise the beam training for NFC. The near-field beam training process based on a hierarchical codebook is illustrated in Fig. 21. Notably, compared to far-field beam training, near-field beam training requires a significantly larger codebook, as near-field channels have to be modelled in the polar domain rather than the angular domain. This results in an increased complexity of the near-field beam training process. Therefore, employing a low-complexity training protocol is critical for near-field beam training. Furthermore, the design of the codebook for near-field beam training is more intricate than that for far-field beam training due to the non-linear phase and possibly non-uniform amplitude of near-field channels. In the following, we discuss beam training for narrowband and wideband systems, respectively.

• Narrowband systems: Several recent works have been devoted to the design of narrowband near-field beam training [85]-[88]. Specifically, a new near-field codebook design was proposed in [85], where the codewords are sampled uniformly in the angular domain and non-uniformly in the distance domain (a code sketch of such a polar-domain codebook is provided at the end of this subsection). To reduce the complexity of near-field beam training, the authors of [86] proposed a two-phase training protocol. In this protocol, the candidate angle is first obtained via conventional far-field beam training in the first phase, which is followed by the estimation of the effective distance in the second phase. As a further advance, the authors of [87] conceived a deep-learning-based near-field beam training method, where a pair of neural networks is designed to determine the optimal near-field beam based on the received signals of the far-field wide beams. Finally, the authors of [88] developed a two-stage hierarchical beam training method. In the first stage, a coarse direction is obtained through conventional far-field beam training. In the second stage, based on the obtained coarse direction, a joint angle-and-distance estimation is carried out over a fine grid in the polar domain.
• Wideband systems: For wideband beam training, the near-field beam split effect has to be considered. Unlike in the data transmission stage, where the beam split effect leads to a performance loss, it has the potential to speed up beam training, as a single beam can cover several directions or locations at different OFDM subcarriers. When the TTD-based hybrid beamforming architecture is employed, flexible control of the near-field beam split can be achieved by properly designing the time delays. Therefore, the size of the codebook and the complexity of the training process can be significantly reduced. To take advantage of this, a fast wideband near-field beam training method was proposed in [58]. This method is based on an antenna architecture that utilizes only TTDs to realize the analog beamforming. However, how to design wideband near-field beam training for hybrid beamforming architectures employing both TTDs and PSs is still an open research problem.

Finally, all the aforementioned beam training methods are based on SPD antennas. For SPD antennas, the analog beamformer is generally subject to a unit-modulus constraint. Therefore, the codebooks can be designed based on near-field array response vectors. On the other hand, for CAP antennas realized by metasurfaces, the complex constraints imposed on the analog beamformers, cf. (93)-(95), make the design of near-field beam training challenging, and more research is needed.
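The sketch below builds a simplified polar-domain codebook in the spirit of [85]: angles are sampled uniformly in the sin-domain, while distances are sampled non-uniformly. Here we sample uniformly in inverse distance as a simplifying assumption; the exact distance-ring rule of [85] differs in its constants.

```python
import numpy as np

# Simplified polar-domain near-field codebook: each codeword is a normalized
# near-field array response vector of a ULA at a sampled (angle, distance).
lam = 0.01                                   # wavelength (illustrative)
N, d = 256, 0.005                            # ULA size, half-wavelength spacing
pos = (np.arange(N) - (N - 1) / 2) * d       # element coordinates on the array

def nf_steering(u, r):
    """Near-field response for direction u = sin(theta) and distance r."""
    dist = np.sqrt(r ** 2 - 2 * r * pos * u + pos ** 2)   # exact path lengths
    return np.exp(-1j * 2 * np.pi / lam * (dist - r)) / np.sqrt(N)

angles = np.linspace(-1, 1, 64)              # uniform in the sin-domain
r_min, r_max, S = 3.0, 100.0, 8
dists = 1.0 / np.linspace(1 / r_max, 1 / r_min, S)   # uniform in 1/r

codebook = np.stack([nf_steering(u, r) for u in angles for r in dists])
print("codebook shape:", codebook.shape)     # (64 * 8, 256) codewords

# Exhaustive training: pick the codeword with maximum received power for a
# hypothetical user at (u0, r0) = (0.3, 12 m).
h = nf_steering(0.3, 12.0)
best = np.argmax(np.abs(codebook.conj() @ h) ** 2)
print("selected (angle, distance):", angles[best // S], dists[best % S])
```

Even this toy example makes the codebook-size problem visible: the distance dimension multiplies the number of codewords by S, which is why the low-complexity protocols of [86]-[88] matter.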
F. Discussion and Open Research Problems

We have introduced several fundamental antenna architectures for near-field beamfocusing and the basic principles of near-field beam training. However, to fully realize the advantages of near-field beamfocusing and beam training in practice, it is essential to overcome several key challenges. In the following, we discuss some of the primary open research problems that require attention:

• Near-field channel estimation: Accurate channel estimation plays a key role in guaranteeing the performance of near-field beamfocusing. In conventional far-field communication, channel estimation techniques often rely on the sparsity of the channels in the angular domain. However, for near-field channels, the sparsity in the angular domain no longer holds. Therefore, it is important to unveil the sparsity of near-field channels in an appropriate domain for channel estimation. This task can be challenging, especially considering the additional distance dimension of near-field channels.
• Near-field beamfocusing with finite-resolution DACs/ADCs: An alternative approach to reducing the complexity of hybrid beamforming is to adopt finite-resolution digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) [46]. In this case, the power consumption of the RF chains can be reduced substantially. However, finite-resolution DACs/ADCs may result in new design challenges, such as limited signal constellations. How to achieve near-field beamfocusing in this case is an open problem.
• Multi-functional near-field beamfocusing: Next-generation wireless networks are envisioned to transcend the communication-only paradigm, incorporating a range of additional functionalities such as computing, sensing, secure transmission, and wireless power transfer. However, integrating such diverse functions poses a challenge, as different near-field beamfocusing designs may be required to optimize the performance of each functionality. Addressing this challenge is essential for unlocking the full potential of NFC in next-generation wireless networks and enabling the seamless integration of diverse functionalities.
• Hybrid-field beamforming and beam training: In practical scenarios, it is highly probable that users are situated in different field regions of the BS, resulting in a combination of near-field and far-field channels. However, the design of hybrid-field beamforming and beam training techniques accounting for the different channel characteristics of the near-field and far-field regions remains an open research challenge.
• Dynamic switching between near-field and far-field communications: Whether a user is located in the near field or the far field is determined by the aperture of the employed antenna arrays and the frequency band. This classification is generally fixed in current system designs, which poses a challenge for the dynamic utilization of the benefits offered by near-field and far-field communications, namely high communication rates and low complexity, respectively. To overcome this challenge, it is imperative to develop new strategies that allow dynamic switching between these regions, thus enabling the exploitation of the full potential of near-field and far-field communications.

IV. NEAR-FIELD PERFORMANCE ANALYSIS
In this section, we provide a comprehensive performance analysis of NFC based on the near-field channel models discussed in Section II. We commence with the performance analysis for basic free-space LoS propagation and then shift our attention to statistical multipath channel models. For LoS channels, the received SNR and the corresponding power scaling law are analyzed for both SPD and CAP antennas. The derived results contribute to a deeper understanding of the performance limits of NFC. For multipath channels, we provide a general analytical framework for three typical performance metrics, namely the outage probability (OP), the ergodic channel capacity (ECC), and the ergodic mutual information (EMI).

A. Performance Analysis for LoS Near-Field Channels

1) System Model: As illustrated in Fig. 22, we consider a typical near-field MISO channel, where the BS is equipped with a UPA containing N ≫ 1 antenna elements and the user is a single-antenna device. The UPA is placed in the x-z plane and centered at the origin. Here, N = N_x N_z, where N_x and N_z denote the numbers of antenna elements along the x- and z-axes, respectively. For the sake of brevity, we assume that N_x and N_z are both odd numbers with N_x = 2Ñ_x + 1 and N_z = 2Ñ_z + 1. The physical dimension of each BS antenna element along the x- and z-axes is √A, and the inter-element spacing is d ≥ √A. As a result, the physical dimensions of the UPA along the x- and z-axes are L_x ≈ N_x d and L_z ≈ N_z d, respectively. Let e_a ∈ (0, 1] denote the aperture efficiency of each antenna element. An antenna's aperture efficiency is defined as the ratio between the antenna's effective aperture and its physical aperture. By this definition, the effective aperture of each antenna element is given by A_e = A e_a. The aperture efficiency is a dimensionless parameter between 0 and 1 that measures how close the antenna comes to using all the radio wave power intersecting its physical aperture [39]. If the aperture efficiency were 100%, then all the wave power falling on the antenna's physical aperture would be converted to electrical power delivered to the load attached to its output terminals. For the hypothetical isotropic antenna element, we have A = A_e = λ²/(4π) [39]. As for the user, let r denote its distance from the center of the antenna array, and let θ ∈ [0, π] and φ ∈ [0, π] denote the azimuth and elevation angles, respectively, cf. Fig. 22. Consequently, the user location can be expressed as r = [rΦ, rΨ, rΩ]^T, where Φ ≜ sin φ cos θ, Ψ ≜ sin φ sin θ, and Ω ≜ cos φ. Moreover, we assume that the user is equipped with a hypothetical isotropic antenna element to receive the incoming signals and that the receiving-mode polarization vector is fixed to ρ ∈ C^{3×1}.

We next evaluate the performance of the considered near-field MISO channel by analyzing the received SNR at the user. Specifically, the SNR achieved by SPD antennas is examined first under the USW, the NUSW, and the general near-field channel model. Subsequently, the SNR achieved by CAP antennas is investigated.
2) SNR Analysis for SPD Antennas: When the BS is equipped with SPD antennas, the received signal at the user can be expressed as follows:

y = √p h^H w s + n,

where s ∈ C is the transmitted data symbol with zero mean and unit variance, h ∈ C^{N×1} is the channel vector between the user and the BS, p is the transmit power, w is the beamforming vector with ‖w‖² = 1, and n ∼ CN(0, σ²) is AWGN. We assume that the BS has perfect knowledge of h and exploits maximum ratio transmission (MRT) beamforming to maximize the received SNR of the user, i.e., w = h/‖h‖. Therefore, the received SNR is given as follows:

γ = (p/σ²) ‖h‖². (109)

For LoS channels, the channel gain ‖h‖² can be expressed as follows [39], [41]:

‖h‖² = Σ_{m,n} |h_{m,n}^i(r)|²,

where h_{m,n}^i(r) denotes the effective channel coefficient between the (m, n)-th BS antenna element and the user for i ∈ {U, N, G}. Section II provides analytical expressions for |h_{m,n}^i(r)|² for several near-field LoS channel models, including the USW model (i = U, Eq. (35)), the NUSW model (i = N, Eq. (36)), and the introduced general model (i = G, Eq. (40)). For each LoS model, we derive a closed-form expression for the received SNR, based on which the power scaling law in terms of the number of antennas, N, can be unveiled.

• USW Channel Model: For the USW model (i = U), |h_{m,n}^U(r)|² can be derived from (35), and the received SNR is provided in the following theorem.

Theorem 1. The received SNR for the USW model is

γ_USW = (p/σ²) N β₀, (111)

where β₀ = A e_a (1/(4πr²)) G₁(0, r) G₂(0, r).

Proof. Please refer to Appendix D. ■

Remark 4. Considering the terms appearing in β₀, G₁(0, r) and G₂(0, r) model the effective aperture loss and the polarization loss, respectively, whose expressions are given by (37) and (38), respectively. Moreover, A e_a/(4πr²) models the free-space path loss. As mentioned in Section II, the near-field channel gain is mainly determined by the free-space path loss, and thus the terms G₁(·, ·) and G₂(·, ·) are not included in (35) for the sake of brevity. As outlined in Appendix D, under the USW channel model, the powers radiated by the different transmit antenna elements are affected by identical polarization mismatches, projected antenna apertures, and free-space path losses. In order to provide a general expression for the received SNR, we have kept the terms related to the effective aperture loss and the polarization loss in Theorem 1.

Next, by letting N_x, N_z → ∞, i.e., N = N_x N_z → ∞, the asymptotic SNR for the USW channel model is given in the following corollary.

Corollary 1. As N_x, N_z → ∞ (N → ∞), the asymptotic SNR for the USW model satisfies

γ_USW = (p/σ²) N β₀ → ∞. (112)

Remark 5. The result in (112) suggests that for the USW model, the received SNR increases linearly with the total number of transmit antenna elements. In other words, by increasing the number of antenna elements, it is possible to increase the link gain to any desired level, which may even exceed the transmit power. This thereby breaks the law of conservation of energy. The reason for this behaviour is that when N tends to infinity, the uniform-amplitude assumption inherent to the USW model cannot capture the exact physical properties of near-field propagation.
• NUSW Channel Model: For the NUSW model (i = N), |h_{m,n}^N(r)|² can be derived from (36), and the received SNR is provided in the following theorem.

Theorem 2. The received SNR for the NUSW model, γ_NUSW, admits the closed-form expression given in (113).

Proof. Please refer to Appendix E. ■

Remark 6. Similar to (111), G₁(0, r) and G₂(0, r) in (113) model the effective aperture loss and the polarization loss, respectively. For the sake of brevity, these two terms are not included in (36). As explained in Appendix E, under the NUSW channel model, the powers radiated by the different transmit antenna elements are affected by the same polarization mismatches and have the same projected antenna apertures. In order to provide a general expression for the received SNR, we have kept the terms related to the effective aperture loss and the polarization loss in Theorem 2.

By letting N_x, N_z → ∞, the asymptotic SNR for the NUSW model is obtained as follows.

Corollary 2. As N_x, N_z → ∞ (N → ∞), the asymptotic SNR for the NUSW model satisfies

γ_NUSW = O(ln N) → ∞. (114)

Proof. Please refer to Appendix F. ■

Remark 7. The result in (114) suggests that, by taking into account the non-uniform amplitude, the received SNR for the NUSW model no longer scales linearly with N, as was the case for the USW model. Instead, it scales only logarithmically with N. However, such a logarithmic scaling law still breaks the law of conservation of energy when N → ∞. The reason for this is that when N tends to infinity, the variations of the projected apertures and polarization losses across the array elements cannot be neglected, which is not considered in the NUSW model.

• The General Channel Model: Under the general model (i = G), |h_{m,n}^G(r)|² can be derived from (40), and the received SNR is given in the following theorem.

Theorem 3. The received SNR for the general model, γ_General, admits the closed-form expression given in (115).

Proof. Please refer to Appendix G. ■

Although (115) can be utilized to calculate the SNR, deriving the power scaling law from this expression is a challenging task. Thus, for mathematical tractability and to gain insights for system design, we consider the simplified case where the receiving-mode polarization vector at the user and the electric current induced in the UPA both point in the x-direction, i.e., ρ = Ĵ(s_{m,n}) = [1, 0, 0]^T, ∀m, n. In this case, the SNR can be approximated as shown in the following corollary.

Corollary 3. When ρ = Ĵ(s_{m,n}) = [1, 0, 0]^T, ∀m, n, the received SNR for the general model can be approximated as in (116).

Proof. Please refer to Appendix H. ■

We further investigate the asymptotic SNR when N_x, N_z → ∞, which is given in the following corollary.

Corollary 4. As N_x, N_z → ∞ (N → ∞), the asymptotic SNR for the general model satisfies

lim_{N→∞} γ_General = p A e_a/(3σ²d²). (117)

Proof. This corollary can be directly obtained by evaluating the limits of the terms in (116) for x ∈ X₁ and z ∈ Z₁. ■

Recalling the fact that d ≥ √A and e_a ∈ (0, 1] yields

p A e_a/(3σ²d²) ≤ p/(3σ²), (120)

and thus lim_{N→∞} γ_General < p/σ².

Remark 8. The result in (117) suggests that for the general channel model, the asymptotic SNR approaches the constant value p A e_a/(3σ²d²) as the UPA grows in size, rather than increasing unboundedly as for the USW and NUSW channel models. Eq. (120) further shows that even with an infinitely large array, at most 1/3 of the total transmitted power can be received by the user, validating that γ_General satisfies the law of conservation of energy when N → ∞. An intuitive explanation for why the received power is at most one-third of the transmitted power, even though the UPA is infinitely large, is that each newly added BS antenna is deployed farther away from the user, which reduces the effective area and increases the polarization loss [41].
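The contrast between Remarks 5, 7, and 8 can be reproduced numerically with the following minimal sketch. For the general model we keep only the free-space path loss and the projected-aperture factor r/d_{m,n} and omit the polarization loss G₂ for simplicity, so the saturation constant here is p A e_a/(2σ²d²) rather than the 1/3 constant in (117); the trends (linear, logarithmic, bounded) are nevertheless visible.

```python
import numpy as np

# Power scaling of USW / NUSW / general LoS models for a broadside user.
# Per-element gains: USW, A*e_a/(4*pi*r^2); NUSW, A*e_a/(4*pi*d_mn^2);
# general (polarization omitted), A*e_a*r/(4*pi*d_mn^3).
lam = 0.01                                   # wavelength (illustrative)
d = lam / 2                                  # element spacing, sqrt(A) = d
A, e_a, r, p_s2 = d ** 2, 1.0, 1.0, 1.0      # element area, efficiency, p/sigma^2

for Nx in [11, 101, 501, 2001]:              # odd UPA size per axis
    c = (np.arange(Nx) - Nx // 2) * d
    x, z = np.meshgrid(c, c)
    d_mn = np.sqrt(x ** 2 + r ** 2 + z ** 2) # distances to user at [0, r, 0]
    snr_usw = p_s2 * Nx ** 2 * A * e_a / (4 * np.pi * r ** 2)
    snr_nusw = p_s2 * np.sum(A * e_a / (4 * np.pi * d_mn ** 2))
    snr_gen = p_s2 * np.sum(A * e_a * r / (4 * np.pi * d_mn ** 3))
    print(f"N = {Nx**2:7d}: USW {snr_usw:9.3f}  NUSW {snr_nusw:7.3f}  "
          f"general {snr_gen:6.3f}")

print("general-model ceiling (no polarization):", p_s2 * A * e_a / (2 * d ** 2))
```

Running the sketch shows the USW SNR eventually exceeding the transmit SNR, while the general-model SNR saturates below its ceiling, mirroring Fig. 23.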
Remark 9. The array occupation ratio is defined as ζ ≜ A/d² ∈ (0, 1] and measures the fraction of the total UPA area that is occupied by array elements [39]. The result in (117) suggests that the asymptotic SNR increases linearly with ζ, because more effective antenna area is available to radiate the signal power for a larger ζ. As stated before, an SPD UPA becomes a CAP surface when ζ = 1. In this case, the asymptotic SNR is given by p e_a/(3σ²), which will be further validated in Section IV-A3.

• Numerical Results: To further verify our results, we show the SNRs obtained for the different channel models versus N in Fig. 23. Particularly, γ_USW, γ_NUSW, γ_General (exact), and γ_General (approximation) are obtained with (111), (113), (115), and (116), respectively. The asymptotic SNR limit given in (117) is also included. As can be observed, for small and moderate N, the SNRs obtained for all models increase linearly with N. This is because, in this case, the user is located in the far field, where all considered models are accurate. However, for sufficiently large N, the projected antenna apertures and polarization losses vary across the transmit antenna array. In this case, γ_USW and γ_NUSW violate the law of conservation of energy and approach infinity. By contrast, Fig. 23 confirms that when N → ∞, the received SNR obtained for the general model, i.e., γ_General, approaches a constant, i.e., the SNR limit, and obeys the law of conservation of energy. Furthermore, Fig. 23 shows that N = 10⁶ antennas are needed before the difference between γ_General and γ_USW (or γ_NUSW) becomes noticeable. In this case, the array size is 5.35 m × 5.35 m, which is a realistic size for future conformal arrays, e.g., deployed on the facades of buildings. In conclusion, although the USW and NUSW models are applicable in some NFC application scenarios, they are not suitable for studying the asymptotic performance in the limit of large N.

3) SNR Analysis for CAP Antennas: The above results obtained for SPD antennas can be directly extended to the case of CAP antennas. In the sequel, we assume that the UPA illustrated in Fig. 22 is a CAP surface of size L_x × L_z, which is placed in the x-z plane and centered at the origin. The signal received by the user is given by

y = √p h_CAP(0, r) s + n,

where h_CAP(0, r) ∈ C is the effective channel between the transmit CAP surface and the user. Therefore, the received SNR is given by

γ_CAP = (p/σ²) |h_CAP(0, r)|², (122)

where |h_CAP(0, r)|² is the effective power gain. On the basis of (122), the received SNR for the USW, NUSW, and general channel models for CAP antennas can be derived, based on which the power scaling law in terms of the surface size, S_CAP ≜ L_x L_z, can be unveiled.

• USW Channel Model: We commence with the USW model for CAP antennas, for which the received SNR can be calculated as follows.

Theorem 4. The received SNR for the USW model for CAP antennas, γ_CAP_USW, is given in closed form in (123).

By letting L_x, L_z → ∞, the asymptotic SNR for the USW model for CAP antennas is obtained in the following corollary.

Corollary 5. As L_x, L_z → ∞ (S_CAP → ∞), the asymptotic SNR for the USW model for CAP antennas scales linearly with S_CAP, cf. (124), and thus grows without bound.

Remark 10. The result in (124) reveals that, for the USW model, the received SNR for CAP antennas scales linearly with the transmit surface area, which leads to a violation of the law of conservation of energy when S_CAP → ∞. The reason for this lies in the fact that when S_CAP approaches infinity, the uniform amplitude assumed in the USW model cannot capture the exact physical properties of near-field propagation.

• NUSW Channel Model: For the NUSW model for CAP antennas, we have the following result.

Theorem 5. The received SNR for the NUSW model for CAP antennas, γ_CAP_NUSW, is given in closed form in (125).

Proof. The proof resembles the proofs of Theorem 2 and Theorem 4.
■

By letting L_x, L_z → ∞, the asymptotic SNR for the NUSW model for CAP antennas is obtained in the following corollary.

Corollary 6. As L_x, L_z → ∞ (S_CAP → ∞), for CAP antennas, the asymptotic SNR for the NUSW model satisfies

γ_CAP_NUSW = O(ln S_CAP) → ∞. (126)

Proof. The proof resembles the proof of Corollary 2. ■

Remark 11. The result in (126) reveals that, by taking into account the non-uniform amplitudes, the received SNR for the NUSW model increases logarithmically with S_CAP, which differs from the linear scaling law obtained for the USW model. However, since the impact of the varying projected apertures and polarization losses across the CAP surface is not considered, γ_CAP_NUSW can exceed p/σ² when S_CAP → ∞, thereby violating the law of energy conservation.

• The General Channel Model: The SNR expression for the general channel model for CAP antennas is provided in the following theorem.

Theorem 6. For CAP antennas, the received SNR for the general channel model, γ_CAP_General, is given in closed form in (127).

Proof. The proof resembles the proofs of Theorem 3 and Theorem 4. ■

Deriving the power scaling law from (127) is a challenging task. Therefore, for convenience, in the following corollary, we consider the special case of ρ = Ĵ([x, 0, z]^T) = [1, 0, 0]^T, ∀x, z, based on which some interesting conclusions can be drawn.

Corollary 7. When ρ = Ĵ([x, 0, z]^T) = [1, 0, 0]^T, ∀x, z, the received SNR for the general channel model for CAP antennas satisfies (128), and as S_CAP → ∞,

lim_{S_CAP→∞} γ_CAP_General = p e_a/(3σ²). (129)

Proof. The proof resembles the proofs of Corollary 3 and Corollary 4. ■

Recalling that e_a ∈ (0, 1], we obtain

p e_a/(3σ²) ≤ p/(3σ²), (130)

and thus lim_{S_CAP→∞} γ_CAP_General < p/σ².

Remark 12. The results in (129) and (130) suggest that γ_CAP_General satisfies the law of conservation of energy when S_CAP → ∞. Note that (129) can also be obtained from (117) by setting the inter-element distance to d = √A, i.e., ζ = 1. This is expected, since a CAP surface is equivalent to an SPD UPA that is fully covered by array elements.

• Numerical Results: To further verify our results, we show the SNRs obtained for the considered channel models versus S_CAP in Fig. 24. Particularly, γ_CAP_USW, γ_CAP_NUSW, and γ_CAP_General are calculated based on (123), (125), and (128), respectively. The asymptotic SNR limit given in (129) is also included as a baseline. As can be observed from Fig. 24, for small and moderate S_CAP, the SNRs obtained for all considered channel models increase linearly with S_CAP. This is because the user is located in the far field, where all considered models are accurate. However, since the impact of the varying projected apertures and polarization losses is ignored, the asymptotic values of γ_CAP_USW and γ_CAP_NUSW exceed the transmit SNR p/σ², thereby breaking the law of conservation of energy. Our observations from Fig. 23 and Fig. 24 highlight the importance of correctly modelling the variations of the free-space path losses, projected apertures, and polarization losses across the antenna array elements and the CAP surface, respectively, when studying the asymptotic limits of the SNR.

4) Summary of the Analytical Results: In Table V, we summarize our analytical results for the SNR and the scaling laws. In the third column of the table, the notation O(1) is used to indicate that the asymptotic SNR is a constant. Although this subsection has focused on MISO transmission, the developed results can be extended to MIMO and multiuser systems. Following a similar approach as for obtaining the results in Table V, the SINR and power scaling laws for MIMO and multiuser transmission can be derived.

B. Performance Analysis for Statistical Multipath Near-Field Channels
Having analyzed the performance for LoS near-field channels, we now shift our attention to statistical multipath near-field channels, as depicted in Fig. 25.

1) Channel Statistics: If both LoS and NLoS components are present, the multipath channel can be modelled as in (11):

h = h̄ + Σ_{ℓ=1}^{L} h̃_ℓ,

where h̄ ≜ β a(r) is the LoS component and h̃_ℓ ≜ β̃_ℓ a(r_ℓ) is the NLoS component generated by the ℓ-th scatterer. Referring to Fig. 25, we rewrite the NLoS component h̃_ℓ as follows:

h̃_ℓ = α_ℓ h_ℓ(r_ℓ, r) h_ℓ(r_ℓ),

where h_ℓ(r_ℓ) ∈ C^{N×1} is the channel vector between the ℓ-th scatterer and the BS, h_ℓ(r_ℓ, r) ∈ C is the channel coefficient between the user and the ℓ-th scatterer, and α_ℓ models the complex random reflection coefficient of the ℓ-th scatterer. The complex gains {α_ℓ}_{ℓ=1}^{L} are generally modelled as independent complex Gaussian distributed variables with α_ℓ ∼ CN(0, σ_ℓ²), where σ_ℓ² represents the intensity attenuation caused by the ℓ-th scatterer [89], [90]. On this basis, the multipath channel can be modelled as follows:

h ∼ CN(h̄, R), (133)

where h̄ = E{h} denotes the channel mean and R = E{(h − h̄)(h − h̄)^H} is the correlation matrix. Eq. (133) corresponds to the Rician fading model. As shown in Fig. 25, the scatterers are located in the near field of the BS [91]. As a result, the LoS channels h̄, {h_ℓ(r_ℓ, r)}_{ℓ=1}^{L}, and {h_ℓ(r_ℓ)}_{ℓ=1}^{L} can be described by the near-field LoS models presented in Section II. Different LoS channel models yield different correlation matrices R, but the statistics of the resulting multipath MISO channel always follow (133).

NFC generally occurs in the mmWave and sub-THz bands; therefore, the resulting channels are sparsely-scattered (N ≫ L) and dominated by LoS propagation, i.e., ‖h̄‖² ≫ Σ_{ℓ=1}^{L} σ_ℓ² |h_ℓ(r_ℓ, r)|² ‖h_ℓ(r_ℓ)‖². Consequently, matrix R is generally rank-deficient. In practical wireless propagation environments, the LoS path might be blocked by obstacles. In this case, the mean of h equals zero, and (133) degrades to a Rayleigh fading channel model with R = E{h h^H}. Although the NFC channel is generally dominated by its LoS component, considering the scenario where the LoS path is blocked is also of interest and has theoretical significance as a limiting case. Hence, besides the Rician distribution, the Rayleigh distribution has also become one of the commonly accepted models for NFC channel modeling; see [91]-[94] and the references therein. The statistical model in (133) is not only applicable to SPD antennas but also to CAP antennas if the CAP surface is sampled into an SPD antenna array [92]-[94]. For CAP surfaces without spatial sampling, the modelling of the resulting multipath channels is an open problem. Hence, we focus our efforts on SPD antennas.

Near-Field Channel Statistics vs. Far-Field Channel Statistics: In NFC, different antenna elements of the same array experience significantly different path lengths and non-uniform channel gains. Due to this property, the correlation matrix R cannot be simplified to an identity matrix, even when the antenna array is half-wavelength-spaced and deployed in an isotropic scattering environment [91]. This means that correlated fading models are physically more realistic for NFC. In other words, adopting a correlated fading model with R ≠ I is required to evaluate NFC performance. By contrast, for the performance analysis of conventional far-field communications, the independent and identically distributed (i.i.d.) Rayleigh fading channel model with R = I has been adopted in most theoretical research.
Considering the above facts, we adopt the correlated fading model for analyzing the statistical near-field performance in the presence of NLoS channels. In this case, the channel vector h can be statistically described as follows:

h = h̄ + R^{1/2} h̃, (135)

where h̃ ∼ CN(0, I). We next evaluate the NFC performance for this statistical channel model by analyzing the OP, ECC, and EMI. For ease of understanding, we first analyze these metrics for correlated MISO Rayleigh channels with h̄ = 0 and then extend the derived results to correlated MISO Rician channels with h̄ ≠ 0. Our aim is to provide a general analytical framework for analyzing the three metrics, i.e., OP, ECC, and EMI, without knowledge of the structure of the correlation matrix R. In other words, our proposed analytical framework is applicable to any R ⪰ 0.

2) Analysis of the OP for Rayleigh Channels: Let R denote the required communication rate. An outage occurs when the actual communication rate log₂(1 + γ) is less than R. Accordingly, the OP for Rayleigh channels can be written as follows:

P = Pr(log₂(1 + γ) < R). (136)

Inserting (109) into (136) yields

P = F_{‖h‖²}(σ²(2^R − 1)/p), (137)

where F_{‖h‖²}(·) is the cumulative distribution function (CDF) of ‖h‖². The OP can be analyzed using the following basic analytical framework, which involves three steps.

• Step 1 - Analyzing the Statistics of the Channel Gain: In the first step, we calculate the probability density function (PDF) and CDF of ‖h‖² to obtain a closed-form expression for P. The main results are summarized as follows.

Lemma 4. For the correlated MISO Rayleigh channel h ∼ CN(0, R), the PDF and CDF of ‖h‖² are given by (138) and (139), respectively, where r_R is the rank of matrix R, {λ_i > 0}_{i=1}^{r_R} are the positive eigenvalues of matrix R, γ(s, x) = ∫₀^x t^{s−1} e^{−t} dt is the lower incomplete Gamma function, ψ₀ = 1, and the coefficients ψ_k (k ≥ 1) can be calculated recursively via (140).

Proof. Please refer to Appendix J. ■

• Step 2 - Deriving a Closed-Form Expression for the OP: In the second step, we exploit the CDF of ‖h‖² to calculate the OP, which yields the following theorem.

Theorem 7. The OP of the considered system is obtained by evaluating the CDF in (139) at σ²(2^R − 1)/p, as given in (141).

Proof. This theorem can be directly proved by substituting (139) into (137). ■

• Step 3 - Deriving a High-SNR Approximation of the OP: In the last step, we investigate the asymptotic behaviour of the OP in the high-SNR regime, i.e., p → ∞, in order to provide further insights for system design. The main results are summarized in the following corollary.

Corollary 8. The asymptotic OP in the high-SNR regime can be expressed in the following form:

P ≃ (G_a · p)^{−G_d}, (142)

where G_d = r_R and G_a = (Γ(r_R + 1) Π_{i=1}^{r_R} λ_i)^{1/r_R}/(σ²(2^R − 1)).

Proof. Please refer to Appendix K. ■

In (142), G_a is referred to as the array gain, and G_d is referred to as the diversity gain or diversity order [95]. The diversity order G_d determines the slope of the OP as a function of the transmit power at high SNR, depicted on a log-log scale. On the other hand, G_a (in decibels) specifies the power gain of the actual OP compared to a benchmark OP of p^{−G_d}. Note that the OP can be improved by increasing G_a or G_d.

Remark 13. The results in Corollary 8 indicate that, in the high-SNR regime, the slope and the power gain of the OP are given by r_R and (Γ(r_R + 1) Π_{i=1}^{r_R} λ_i)^{1/r_R}/(σ²(2^R − 1)), respectively.
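The OP framework above can be checked with a short Monte Carlo sketch. Since ‖h‖² with h = R^{1/2} h̃ has the same law as Σ_i λ_i |g_i|² for i.i.d. g_i ∼ CN(0, 1), where λ_i are the eigenvalues of R, we sample directly in the rank-r_R eigenspace; R is a toy rank-2 example, and the high-SNR asymptote follows Corollary 8.

```python
import numpy as np
from math import factorial

# Monte Carlo OP for a correlated Rayleigh channel, cf. (135)-(137) and (142).
rng = np.random.default_rng(2)
lam = np.array([1.0, 0.3])                    # positive eigenvalues of R
r_R, Rate, sigma2, trials = len(lam), 1.0, 1.0, 1_000_000

g = (rng.normal(size=(r_R, trials)) + 1j * rng.normal(size=(r_R, trials))) / np.sqrt(2)
gain = lam @ np.abs(g) ** 2                   # samples of ||h||^2

G_a = (factorial(r_R) * np.prod(lam)) ** (1 / r_R) / (sigma2 * (2 ** Rate - 1))
for p_dB in [0, 10, 20]:
    p = 10 ** (p_dB / 10)
    op = np.mean(np.log2(1 + p * gain / sigma2) < Rate)
    # (G_a * p)^(-r_R) is only accurate at high SNR.
    print(f"p = {p_dB:2d} dB: OP = {op:.2e}, asymptote = {(G_a * p) ** (-r_R):.2e}")
```

At high transmit power, the simulated OP drops by roughly two orders of magnitude per 10 dB, confirming the diversity order G_d = r_R = 2.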
• Numerical Results: To illustrate the above derivations, we show the OP as a function of the transmit power, p, in Fig. 26 for the USW channel model and various values of L. The simulation results are denoted by markers. The analytical and asymptotic results are calculated using (141) and (142), respectively. Fig. 26 reveals that the analytical results are in good agreement with the simulated results, and the derived asymptotic results approach the numerical results in the high-SNR regime (p → ∞). Furthermore, it can be observed that larger values of L yield higher diversity orders.

• Extension to the MIMO Case: Note that the definition presented in (136) only applies to MISO channels. The extension to the single-user MIMO case with isotropic inputs is given by

P = Pr(log₂ det(I + (p/σ²) H H^H) < R), (143)

where H ∈ C^{N_R×N_T} is the channel matrix with N_R and N_T denoting the numbers of receive and transmit antennas, respectively. The evaluation of the OP in (143) requires the application of tools from random matrix theory; please refer to [96]-[98] and the references therein for more details. The existing literature shows that the asymptotic OP for MIMO channels in the high-SNR regime also follows the standard form given in (142) (see, e.g., [99]).

3) Analysis of the ECC for Rayleigh Channels: Having analyzed the OP, we turn our attention to the ECC. Achieving the ECC requires the BS to adaptively adjust its coding rate to the channel capacity at the beginning of each coherence interval. The ECC mathematically equals the mean of the instantaneous channel capacity log₂(1 + (p/σ²)‖h‖²), which can be expressed as follows:

C̄_rayleigh = E{log₂(1 + (p/σ²)‖h‖²)}. (144)

The analysis of the ECC also comprises three steps.

• Step 1 - Analyzing the Statistics of the Channel Gain: In the first step, we analyze the statistics of the channel gain ‖h‖². The results are provided in Lemma 4.

• Step 2 - Deriving a Closed-Form Expression for the ECC: In the second step, we exploit the PDF of ‖h‖² to derive a closed-form expression for C̄_rayleigh, which yields the following theorem.

Theorem 8. The ECC can be expressed in closed form as given in (145), where Ei(x) = −∫_{−x}^{∞} e^{−t} t^{−1} dt denotes the exponential integral function.

Proof. This theorem is proved by substituting (138) into (144) and solving the resulting integral with the aid of [100, Eq. (4.337.5)]. ■

• Step 3 - Deriving a High-SNR Approximation of the ECC: In the third step, we perform an asymptotic analysis of the ECC for sufficiently high SNR (p → ∞) in order to provide additional insights for system design. The asymptotic ECC in the high-SNR regime is presented in the following corollary.

Corollary 9. The asymptotic ECC in the high-SNR regime can be expressed in the following form:

C̄_rayleigh ≃ S_∞ (log₂(p) − L_∞), (146)

where S_∞ = 1 and L_∞ is given in closed form in (147), in which ψ(x) = (d/dx) ln Γ(x) denotes the Digamma function.

Proof. Please refer to Appendix L. ■

It is worth noting that log₂(p) in (146) can be rewritten as log₂(p) = 10 log₁₀(p)/(10 log₁₀ 2) = p|_dB/(3 dB). Hence, L_∞ is termed the high-SNR power offset in 3-dB units [101], and S_∞ is referred to as the high-SNR slope, the number of DoFs, the maximum multiplexing gain, or the pre-log factor in bits/s/Hz/(3 dB). The high-SNR slope S_∞ characterizes the ECC as a function of the transmit power at high SNR on a log scale. In multi-antenna communications, S_∞ quantifies the number of spatial DoFs, which determines the number of spatial dimensions available for communications. On the other hand, L_∞ is the power offset with respect to the baseline ECC curve S_∞ log₂(p). Note that the ECC can be improved by increasing S_∞ or decreasing L_∞.

Remark 14. The results in Corollary 9 reveal that the high-SNR slope and the high-SNR power offset of the ECC are given by 1 and −E{log₂(‖h‖²/σ²)}, respectively.
• Numerical Results: To illustrate the above results, we show the ECC versus the transmit power, p, in Fig. 27 for the USW channel model and various values of L; the system is operating at 28 GHz. The analytical and asymptotic results are calculated using (145) and (146), respectively. Simulation results are plotted using markers. As Fig. 27 shows, the analytical results are in excellent agreement with the simulated results, and the derived asymptotic results approach the numerical results in the high-SNR regime (p → ∞). For the considered case, the high-SNR slope is independent of L, which causes the ECC curves in Fig. 27 to be parallel to each other at high transmit powers.

• Extension to the MIMO Case: The definition presented in (144) applies to MISO systems. The extension to single-user MIMO systems with isotropic inputs is given by

C̄ = E{log₂ det(I + (p/σ²) H H^H)}. (148)

The evaluation of the ECC in (148) requires the application of random matrix theory; see [96]-[98], [102] for more details. Furthermore, the existing literature has shown that the asymptotic ECC for MIMO channels in the high-SNR regime can also be expressed in the standard form given in (146) (see, e.g., [101]). Besides the ECC and OP, the diversity-multiplexing tradeoff (DMT) is another important performance metric for statistical NFC MIMO channels. Based on the system model sketched in Fig. 25, the statistical MIMO channel under Rayleigh fading can be characterized as

H = A_r Λ_H A_t^H,

where A_r and A_t are two deterministic matrices containing the array steering vectors, and Λ_H is a diagonal random matrix with complex Gaussian distributed diagonal elements. Since A_r Λ_H A_t^H corresponds to a finite-dimensional channel model [103], conventional random matrix theory tools are difficult to apply. Therefore, a theoretical analysis of the DMT for this channel is an open problem, which deserves further research attention.

4) Analysis of the EMI for Rayleigh Channels: For our analysis of the ECC, we have utilized Shannon's formula to calculate the channel capacity, i.e., log₂(1 + γ). From the information-theoretic point of view, the channel capacity log₂(1 + γ) is achievable when the transmitter sends Gaussian distributed source signals [104]. However, although Gaussian signals are capacity-achieving, practical systems transmit signals belonging to finite and discrete constellations, such as quadrature amplitude modulation (QAM) [20], [105]-[107]. The analysis of the mutual information (MI) for finite-alphabet input signals (finite-alphabet inputs) is very different from that for Gaussian distributed input signals (Gaussian inputs). Motivated by this, we establish a basic analytical framework for the EMI achieved by finite-alphabet inputs over correlated fading channels.

The EMI for fading channels is best understood by considering the MI of a scalar Gaussian channel with finite-alphabet inputs. To this end, consider the scalar AWGN channel

Y = √γ X + Z, (149)

where Z ∼ CN(0, 1) is the AWGN, γ is the SNR, and X ∈ C is the transmitted symbol. We assume that X satisfies the power constraint E{|X|²} = 1 and is taken from a finite constellation alphabet X consisting of Q points, i.e., X = {x_q}_{q=1}^{Q}. The q-th symbol in X, i.e., x_q, is transmitted with probability p_q, 0 < p_q < 1, and the vector of probabilities is denoted by p_X = [p₁, …, p_Q]^T. For this AWGN channel, the MI is given by [20]

I_X(γ) = H_{p_X} − Σ_{q=1}^{Q} p_q E_Z{log₂(Σ_{q'=1}^{Q} (p_{q'}/p_q) e^{|Z|² − |√γ(x_q − x_{q'}) + Z|²})}, (150)

where H_{p_X} = −Σ_q p_q log₂ p_q is the entropy of the input distribution p_X in bits. When X is a Gaussian constellation, I_X(γ) degenerates to I_X(γ) = log₂(1 + γ). In contrast to Shannon's formula, the MI in (150) generally cannot be simplified to a closed-form expression, which complicates the subsequent analysis.
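The expectation in (150) is easily evaluated by Monte Carlo integration. The sketch below does so for equiprobable QPSK and compares the result against Shannon's formula, illustrating both the low-SNR agreement and the saturation at H_{p_X} = 2 bits.

```python
import numpy as np

# Monte Carlo evaluation of the finite-alphabet MI in (150) for equiprobable
# QPSK: I_X(gamma) = log2(Q) - average over sent symbols and noise of
# log2( sum_{q'} exp(|z|^2 - |sqrt(gamma)(x_q - x_{q'}) + z|^2) ).
rng = np.random.default_rng(3)
X = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # unit power
Q, trials = len(X), 100_000

def mi_qpsk(gamma):
    z = (rng.normal(size=trials) + 1j * rng.normal(size=trials)) / np.sqrt(2)
    h_cond = 0.0
    for xq in X:                                     # average over sent symbols
        d = np.sqrt(gamma) * (xq - X[:, None]) + z   # shape (Q, trials)
        log_sum = np.log2(np.sum(np.exp(np.abs(z) ** 2 - np.abs(d) ** 2), axis=0))
        h_cond += np.mean(log_sum) / Q
    return np.log2(Q) - h_cond

for snr_dB in [-5, 0, 5, 10, 15]:
    g = 10 ** (snr_dB / 10)
    print(f"SNR {snr_dB:3d} dB: I_QPSK = {mi_qpsk(g):.3f}, "
          f"log2(1+SNR) = {np.log2(1 + g):.3f} bit/s/Hz")
```

Evaluating (150) this way is exactly what makes closed-form EMI analysis hard, and it motivates the curve-fitting approximation introduced next.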
By a straightforward extension of (150) to a single-input vector channel, the EMI achieved in the considered MISO Rayleigh channel can be expressed as follows [108]:

Ī_X = E{I_X((p/σ²)‖h‖²)}. (151)

To characterize the EMI, we follow three main steps, which are detailed in the sequel.

• Step 1 - Analyzing the Statistics of the Channel Gain: Similar to the analyses of the OP and the ECC, the statistics of ‖h‖² are needed. The results are given in (138) and (139).

• Step 2 - Deriving a Closed-Form Expression for the EMI: In the second step, we leverage the PDF of ‖h‖² to derive a closed-form expression for the EMI.

As stated before, there is no closed-form expression for the MI, which makes the derivation of Ī_X a challenging task. As a compromise, we resort to developing accurate approximations of Ī_X. As suggested in [108]-[111], by using multi-exponential decay curve fitting (M-EDCF), the MI given in (150) can be approximated as follows:

Î_X(γ) = H_{p_X} − Σ_{k=1}^{K} α_k e^{−β_k γ}, (152)

where the fitting parameters {α_k, β_k}_{k=1}^{K} can be found by using the open-source fitting software package 1stOpt [109]. Table VI lists the fitting parameters of the most commonly used equiprobable square QAM constellations, i.e., 4-QAM (or quadrature phase shift keying, QPSK), 16-QAM, 64-QAM, and 256-QAM. Based on the data in Table VI and the discussions in [109], using Î_X(·) to approximate I_X(·) yields an absolute error of O(10⁻⁴) and a relative error of O(10⁻³). Considering this excellent approximation quality, it is reasonable to employ Î_X(·) to approximate the EMI, which yields the following theorem.

Theorem 9. The EMI achieved by finite-alphabet inputs can be approximated by the closed-form expression given in (153).

Proof. This theorem can be proved by substituting (138) and (152) into (151) and calculating the resulting integral with the aid of [100, Eq. (3.326.2)]. ■

Given the closed-form expression for the EMI, the energy efficiency (EE), i.e., the number of transmitted bits per unit of consumed energy (in bit/Joule), can also be obtained as follows:

EE = W · EMI / P_tot, (154)

where P_tot denotes the total consumed power (in Watt), W is the bandwidth (in Hz), and EMI represents the spectral efficiency (SE) or EMI (in bit/s/Hz). Particularly, we have EMI = C̄ for Gaussian inputs and EMI = Ī_X for finite-alphabet inputs. According to the circuit power consumption model in [112], [113], the total consumed power is calculated as follows:

P_tot = p/ζ_eff + P_circ, (155)

where ζ_eff < 1 is the efficiency of the power amplifier, p denotes the transmit power, and P_circ is given as follows:

P_circ = P_syn + P_TR + P_RR, (156)

with P_syn, P_TR, and P_RR representing the circuit power consumption of the frequency synthesizer, the transmit RF chains, and the receive RF chains, respectively. Inserting (155) and (156) into (154) yields (157), i.e., the EE as a function of the transmit power p.

• Step 3 - Deriving a High-SNR Approximation of the EMI: In the last step, we investigate the asymptotic behaviour of the EMI in the high-SNR regime, i.e., p → ∞. The main results are summarized as follows.

Corollary 10. The asymptotic EMI in the high-SNR regime can be expressed as follows:

Ī_X ≃ H_{p_X} − (A_a · p)^{−A_d}, (158)

where A_d = r_R and the array gain A_a is determined by the Mellin transform of the MMSE function, cf. Appendix M. Here, MMSE_X(t) denotes the minimum mean square error (MMSE) in estimating X in (149) from Y when γ = t, and M[ϱ(t); z] ≜ ∫₀^∞ t^{z−1} ϱ(t) dt denotes the Mellin transform of ϱ(t) [114].

Proof. Please refer to Appendix M. ■

Remark 15. The result in (158) suggests that the EMI achieved by finite-alphabet inputs converges to H_{p_X} in the limit of large p, and its rate of convergence (ROC) equals the rate at which (A_a · p)^{−A_d} converges to 0.
In (158), A_a is referred to as the array gain, and A_d is referred to as the diversity order. The diversity order A_d determines the slope of the ROC, i.e., of (A_a p)^{−A_d}, as a function of the transmit power at high SNR on a log-log scale. On the other hand, A_a (in decibels) determines the power gain relative to a benchmark ROC curve of p^{−A_d}. It is noteworthy that the ROC is accelerated by increasing A_a or A_d, and a faster ROC yields a larger EMI.

Remark 16. The results in Corollary 10 reveal that the diversity order of the EMI is given by r_R, while the array gain follows from the Mellin transform of the MMSE function.

Remark 17. By comparing (158) with (146), one can see a significant difference between the EMI achieved by finite-alphabet inputs and Gaussian inputs. The EMI achieved by Gaussian inputs, also referred to as the ECC, grows unboundedly with γ. In contrast, the EMI achieved with finite-alphabet inputs converges to a constant value.

• Numerical Results: To further illustrate our derived results, in Fig. 28, we plot the EMI for USW LoS channels achieved by square QAM constellations versus the transmit power, p. The simulation results are denoted by markers. The approximated results are obtained based on (153). As can be observed in Fig. 28, the approximated results are in excellent agreement with the simulation results. This verifies the accuracy of the approximations in (152) and (153). The EMI achieved by Gaussian inputs is also shown as a baseline. As Fig. 28 shows, the EMI for Gaussian inputs grows unboundedly as p increases, whereas the EMI for finite-alphabet inputs converges to the entropy of the input distribution, H_{p_X}, in the limit of large p. This observation validates our discussion in Remark 17. Moreover, we observe that in the low-SNR regime, the EMI achieved by finite-alphabet inputs is close to that achieved by Gaussian inputs, which is consistent with the results in [115]. To illustrate the ROC of the EMI, we depict H_{p_X} − Ī_X versus p in Fig. 29. As can be observed, in the high-SNR regime, the derived asymptotic results approach the numerical results. Besides, it can be observed that lower modulation orders yield faster ROCs. The numerical results presented in Fig. 28 and Fig. 29 are based on square M-QAM constellations, which are the most widely used modulation schemes in practical communication systems, supported, e.g., in 5G New Radio (NR) [116]. For a thorough study, we compare the EMI achieved by QAM and phase shift keying (PSK) modulation in Fig. 30. PSK is used in wireless local area network applications as well as in Bluetooth and radio frequency identification (RFID). As observed from Fig. 30, for a given modulation order M > 4, the EMI of PSK is smaller than that of QAM. This means that PSK has a lower spectral efficiency than QAM. Moreover, PSK has an inferior BER performance compared to QAM at high SNRs, as the phase transitions become more difficult to detect. PSK may also require more complex phase synchronization and demodulation techniques, especially for higher-order PSK [117]. The above arguments imply that QAM is preferable to PSK for application in future NFC networks.
Fig. 31 illustrates the EE-SE tradeoff obtained from (157) when the transmit power budget p varies from 0 dBm to 50 dBm. It can be observed that, as the EMI or SE increases, the EE first increases and then decreases. This suggests that there exists an optimal EMI that maximizes the EE, which is denoted by EMI*. The reason for this behaviour is that the total consumed power P_tot increases linearly with the power budget p, whereas the EMI and SE increase sub-linearly with p. Particularly, for finite-alphabet inputs, the achieved EMI converges to a constant as p increases. Therefore, the corresponding EE decreases rapidly when the EMI exceeds EMI*. Additionally, it can be seen that a larger modulation order yields a larger value of EMI* and a better SE-EE tradeoff. Gaussian input signals provide an upper bound on the EE performance. In the low-EMI or low-SNR regime, we observe that the EE achieved by finite-alphabet inputs is close to that achieved by Gaussian inputs, which is consistent with the results shown in Fig. 28. It is thus advisable to apply lower-order modulation in the low-power regime [118] (a code sketch of this tradeoff is provided at the end of this subsection).

• Extension to the MIMO Case: Compared with the EMI achieved in MISO channels (as formulated in (151)), the analysis of the EMI achieved in MIMO channels is much more challenging. In a multiple-stream MIMO channel, the received signal vector is given by

y = √(p/σ²) H P x + n, (159)

where n ∼ CN(0, I) is Gaussian noise, P ∈ C^{N_T×N_D} denotes the precoding matrix satisfying tr(PP^H) = 1 with N_D being the number of data streams, and x ∈ C^{N_D×1} is the data vector with i.i.d. elements drawn from constellation X. Hence, the input signal x is taken from a multi-dimensional constellation with E{xx^H} = I. Assume x_q is sent with probability q_q, 0 < q_q < 1, and the input distribution is given by q = [q₁, …, q_{Q^{N_D}}]^T. The MI in this case, denoted by I_X(p/σ²; HP), can be written in a form analogous to (150), cf. (160) [20], [119], [120]. The EMI achieved over the channel in (159) is given by

Ē_X = E{I_X(p/σ²; HP)}. (161)

By comparing I_X(p/σ²; HP) in (160) with I_X(p/σ²) in (150), we observe that I_X(γ; HP) has an even less tractable form than I_X(p/σ²). Therefore, our methodology developed for analyzing E{I_X(p/σ²)} cannot be straightforwardly applied to the analysis of E{I_X(p/σ²; HP)}. In the past years, the EMI achieved by finite-alphabet inputs in MIMO channels has been studied extensively, and many approximate expressions for the EMI have been derived under different fading models; see [105], [121], [122] and the references therein. However, it should be noted that the problem of characterizing the high-SNR asymptotic EMI for MIMO transmission, i.e., lim_{p→∞} E{I_X(p/σ²; HP)}, had been open for years, and only a couple of works have appeared recently. The author of [119] discussed the high-SNR asymptotic behaviour of E{I_X(γ; HP)} for isotropic inputs and correlated Rician channels. The authors of [120] further characterized the high-SNR EMI by considering a double-scattering fading model and non-isotropic precoding. Most interestingly, as shown in these two works, the asymptotic EMI for MIMO channels in the high-SNR regime also follows the standard form given in (158).

By now, we have established a framework for analyzing the OP, ECC, and EMI for the correlated MISO Rayleigh fading model. We next exploit this framework to analyze the NFC performance for correlated Rician fading.
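The EE-SE behaviour discussed above is reproduced by the following minimal sketch of (154)-(157). For brevity, the SE is computed with Shannon's formula for a fixed channel gain; all parameter values are illustrative assumptions, not the simulation setup used for Fig. 31.

```python
import numpy as np

# EE-SE tradeoff: EE = W * SE / P_tot with P_tot = p / zeta_eff + P_circ.
W = 100e6                                    # bandwidth: 100 MHz
zeta_eff, P_circ = 0.4, 1.0                  # PA efficiency, circuit power (W)
g_over_s2 = 1e2                              # channel gain over noise power

best = (0.0, None)
for p_dBm in np.arange(0, 51, 1):
    p = 10 ** ((p_dBm - 30) / 10)            # dBm -> Watt
    se = np.log2(1 + p * g_over_s2)          # bit/s/Hz (Gaussian inputs)
    ee = W * se / (p / zeta_eff + P_circ)    # bit/Joule
    if ee > best[0]:
        best = (ee, (p_dBm, se))

print(f"EE-optimal point: p = {best[1][0]} dBm, SE = {best[1][1]:.2f} bit/s/Hz, "
      f"EE = {best[0] / 1e6:.1f} Mbit/Joule")
```

Sweeping p reveals the rise-then-fall shape of the EE: below the optimum, circuit power dominates and extra transmit power is almost free; above it, the logarithmic SE growth cannot keep up with the linear power increase.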
5) Analysis of the OP for Rician Channels: Based on (136) and (137), the OP can be expressed as follows:

P_rician = F_{‖h‖²}(σ²(2^R − 1)/p), (162)

which is analyzed in the following three steps.

• Step 1 - Analyzing the Statistics of the Channel Gain: In the first step, we analyze the statistics of ‖h‖² = ‖h̄ + R^{1/2} h̃‖² to obtain a closed-form expression for P_rician. We commence with the following lemma.

Lemma 5. The channel gain can be decomposed as ‖h‖² = ã₀ + ã, cf. (163), where ã₀ ≥ 0 is a deterministic constant and ã = Σ_{i=1}^{r_R} λ_i |h̃_i + λ_i^{−1/2} h̄_i|², with {h̄_i} denoting the projections of h̄ onto the eigenvectors of R associated with the positive eigenvalues {λ_i}.

Proof. Please refer to Appendix N. ■

Note that ã₀ is a deterministic constant, and the randomness in (163) originates from ã. Moreover, since [h̃₁, …, h̃_{r_R}]^T ∼ CN(0, I), 2‖λ_i^{−1/2} h̄_i + h̃_i‖² follows a noncentral chi-square distribution with 2 degrees of freedom and noncentrality parameter κ_i = 2λ_i^{−1}|h̄_i|² [123]. Consequently, ã can be expressed as a weighted sum of noncentral chi-square variates with 2 degrees of freedom, noncentrality parameters 2λ_i^{−1}|h̄_i|², and weight coefficients a_i = λ_i/2.
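The decomposition in Lemma 5 can be verified numerically. The sketch below compares direct samples of ‖h‖² against the equivalent construction as ã₀ plus a weighted sum of noncentral chi-square variates; R is a toy rank-2 correlation matrix, and all sizes are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of ||h||^2 = a_0 + sum_i (lam_i/2) * chi2_nc(2, kappa_i)
# for h = h_bar + R^{1/2} h_tilde, with kappa_i = 2*|hb_i|^2/lam_i.
rng = np.random.default_rng(4)
N, L, trials = 16, 2, 200_000
A = (rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L))) / np.sqrt(2)
R = A @ A.conj().T / L                        # rank-L correlation matrix
h_bar = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

lam, U = np.linalg.eigh(R)
lam, U = lam[::-1], U[:, ::-1]                # descending eigenvalue order
lam_pos, U_pos = lam[:L], U[:, :L]            # positive eigenvalues (rank L)
hb = U.conj().T @ h_bar                       # h_bar in the eigenbasis
a0 = np.sum(np.abs(hb[L:]) ** 2)              # deterministic part of ||h||^2

# Direct sampling of ||h||^2.
ht = (rng.normal(size=(N, trials)) + 1j * rng.normal(size=(N, trials))) / np.sqrt(2)
Rsq = U_pos @ np.diag(np.sqrt(lam_pos)) @ U_pos.conj().T
g1 = np.sum(np.abs(h_bar[:, None] + Rsq @ ht) ** 2, axis=0)

# Equivalent noncentral chi-square construction (Lemma 5).
nc = 2 * np.abs(hb[:L]) ** 2 / lam_pos
chis = np.stack([rng.noncentral_chisquare(2, k, size=trials) for k in nc])
g2 = a0 + (lam_pos / 2) @ chis

print(f"mean: {g1.mean():.3f} vs {g2.mean():.3f}; "
      f"var: {g1.var():.3f} vs {g2.var():.3f}")
```

The matching moments (and, if desired, matching empirical CDFs) confirm that ‖h‖² never falls below ã₀, which is precisely the property behind the zero-outage threshold p₀ discussed below.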
On this basis, the PDF and CDF of ã are presented in the following lemmas.

Lemma 6. The PDF of ã is given by the Laguerre series expansion in (164), where L_n^{(·)}(·) is the generalized Laguerre polynomial [100, Eq. (8.970.1)] and (x)_n is the Pochhammer symbol [124]. Based on [125], the coefficients c_k are recursively obtained via (167). In (167), the parameters ξ₀ and ϖ are selected in a suitable manner to guarantee the uniform convergence of (164). More specifically, if ξ₀ < r_R/2, then the series representation in (164) converges uniformly in any finite interval for all ϖ > 0. If ξ₀ ≥ r_R/2, then (164) converges uniformly in any finite interval for ϖ > (2 − r_R/ξ₀)a_max/2, where a_max = max_{i=1,…,r_R} a_i. In this paper, we set ξ₀ = r_R/3, for which uniform convergence is guaranteed for any ϖ > 0.

Proof. Please refer to [126, Section 3]. ■

Lemma 7. The CDF of ã is given by the series expansion in (168), where the coefficients m_k are recursively obtained via (169). In (169), the parameters ξ₀ and ϖ are selected in a similar manner as for the PDF to ensure the uniform convergence of (168). Exploiting Lemma 5 together with Lemmas 6 and 7, the CDF and PDF of ‖h‖² = ã₀ + ã follow by a simple shift and are given in (170) and (171), respectively.

• Step 2 - Deriving a Closed-Form Expression for the OP: In the second step, we exploit the CDF of ‖h‖² to calculate the OP, which yields the following theorem.

Theorem 10. The OP of the considered system is given by (172).

Proof. This theorem can be directly proved by substituting (170) into (162). ■

Remark 18. The result in (172) shows that the OP for NFC Rician fading channels is a piecewise function of p, where the two pieces intersect at p₀ = (2^R − 1)σ²/ã₀. This means that for MISO Rician channels, achieving a zero OP only requires a finite transmit power. By contrast, for MISO Rayleigh channels, we have h̄ = 0 and thus ã₀ = 0, which means that achieving a zero OP requires an infinitely high transmit power. The above discussion reveals that the LoS component can improve the outage performance of NFC.

• Step 3 - Deriving the Rate of Convergence of the OP: In Step 3 of Section IV-B2, we investigated the high-SNR asymptotic behaviour of P_rayleigh and formulated it in the standard form (G_a · p)^{−G_d} given by (142). Considering (172), P_rician satisfies lim_{p→∞} P_rician = 0. However, since P_rician is a piecewise function of p, we cannot formulate its high-SNR approximation in the standard form given by (142). It is worth noting that the standard form (G_a · p)^{−G_d} essentially characterizes the rate at which P_rayleigh converges to zero. Motivated by this, in the last step, we investigate the rate at which P_rician converges to zero.

Based on (172), as p increases, P_rician becomes zero when p = p₀ = (2^R − 1)σ²/ã₀. The rate at which P_rician converges to zero is characterized in the following corollary.

Corollary 11. The rate at which P_rician converges to zero equals the rate at which (G̃_a · p)^{−G̃_d} converges to zero, cf. (174), where G̃_d = r_R and the array gain G̃_a is given in (173).

Remark 19. Based on (174), the diversity order and the array gain of P_rician are given by r_R and G̃_a, respectively.

Remark 20. When h̄ = 0, the Rician model degenerates to the Rayleigh model, as ã₀ = 0. In this case, (174) degenerates to the standard form given by (142). By comparing (142) with (174), we find that G̃_d = G_d and G_a < G̃_a. This suggests that the OP for Rician fading yields the same diversity order as that for Rayleigh fading, yet provides a larger array gain than the latter.

• Numerical Results: To further illustrate the derived results, in Fig. 32, we show the OP for USW LoS channels as a function of the transmit power, p. The analytical results are calculated using (172). As can be observed in Fig. 32, the analytical results are in good agreement with the simulated results, and the OP collapses to zero when p = (2^R − 1)σ²/ã₀. This verifies the correctness of Theorem 10. The simulation parameters used to generate Fig. 32 are the same as those used to generate Fig. 26. For this simulation setting, we have ã₀ ≫ E{ã}, which means that the considered BS-to-user channel is LoS-dominated. By comparing the results in Fig. 32 and Fig. 26, we find that, to achieve the same OP, the Rician fading channel requires much less power than the Rayleigh fading channel. This performance gain mainly originates from the strong LoS component of Rician fading. To illustrate the ROC of the OP, we depict P_rician versus p in Fig. 33. As can be observed, as p approaches p₀, the derived asymptotic results approach the analytical results. Besides, it can be observed that a higher diversity order is achievable when the channel contains more scatterers.

6) Analysis of the ECC for Rician Channels: Having analyzed the OP, we turn our attention to the ECC, which is given as follows:

C̄_rician = E{log₂(1 + (p/σ²)‖h‖²)}. (175)

The analysis of the ECC also comprises three steps.

• Step 1 - Analyzing the Statistics of the Channel Gain: In the first step, we analyze the statistics of the channel gain ‖h‖². The results can be found in Lemma 6.

• Step 2 - Deriving a Closed-Form Expression for the ECC: In the second step, we exploit the CDF of ‖h‖² to derive a closed-form expression for C̄_rician, which leads to the following theorem.

Theorem 11. The ECC can be expressed in closed form as given in (176), with ā ≜ 2ϖp/(σ² + pã₀).

Proof. This theorem is proved by substituting (171) into (175) and solving the resulting integral with the aid of [100, Eq. (4.337.5)]. ■

• Step 3 - Deriving a High-SNR Approximation of the ECC: In the third step, we perform an asymptotic analysis of the ECC for sufficiently high transmit power, i.e., p → ∞, in order to obtain further insights for system design. The asymptotic ECC in the high-SNR regime is presented in the following corollary.

Corollary 12. The asymptotic ECC in the high-SNR regime can be expressed in the following form:

C̄_rician ≃ S̃_∞ (log₂(p) − L̃_∞), (177)

where S̃_∞ = 1 and L̃_∞ = −E{log₂(‖h‖²/σ²)}.

Proof. The proof closely follows that of Corollary 9. ■

Remark 21. The results in Corollary 12 suggest that the high-SNR slope and the high-SNR power offset of C̄_rician are given by 1 and −E{log₂(‖h‖²/σ²)}, respectively. By comparing (177) with (146), we find that S̃_∞ = S_∞, which suggests that the ECC for Rician fading yields the same high-SNR slope as that for Rayleigh fading. However, a theoretical comparison between L̃_∞ and L_∞ is challenging. Thus, we will use numerical results to compare these two metrics.
• Numerical Results: To illustrate the above results, in Fig. 34, we show the ECC for USW LoS channels versus the transmit power, p. As Fig. 34 shows, the analytical results are in excellent agreement with the simulated results, and the derived asymptotic results approach the analytical results in the high-SNR regime. The simulation parameters used to generate Fig. 34 are the same as those used to generate Fig. 27. By comparing the results in these two figures, we find that, to achieve the same ECC, the Rician fading channel requires much less power than the Rayleigh fading channel. Since the ECCs achieved for these two types of fading have the same high-SNR slope, we conclude that the ECC for Rician fading yields a smaller high-SNR power offset than that for Rayleigh fading. This performance gain mainly originates from the strong LoS component of Rician fading.

7) Analysis of the EMI for Rician Channels: The EMI can be written as in [108]. To characterize the EMI, we follow three main steps, which are detailed in the sequel.

• Step 1 - Analyzing the Statistics of the Channel Gain: Similar to the analyses of the OP and the ECC, we discuss the statistics of ∥h∥² in the first step. The results are provided in (170) and (171).

• Step 2 - Deriving a Closed-Form Expression for the EMI: In the second step, we leverage the PDF of ∥h∥² to derive a closed-form expression for the EMI, I_rician. The main results are summarized as follows.

Theorem 12. The EMI achieved by finite-alphabet inputs can be approximated in closed form.

• Step 3 - Deriving a High-SNR Approximation for the EMI: In the last step, we investigate the asymptotic behaviour of the EMI in the high-SNR regime, i.e., p → ∞. The main results are summarized as follows.

Corollary 13. The asymptotic EMI in the high-SNR regime can be expressed in closed form, where d_{X,min} ≜ min_{q≠q′} |x_q − x_{q′}|. For an equiprobable square M-QAM constellation, the asymptotic EMI in the high-SNR regime can be expressed as in (181).

Remark 22. The above results mean that, to achieve the same EMI, much less power is required in Rician fading than in Rayleigh fading.

• Numerical Results: To further illustrate the obtained results, in Fig. 35, we plot the EMI for USW LoS channels achieved by equiprobable square M-QAM constellations versus the transmit power, p. The simulation results are denoted by markers. As can be observed in Fig. 35, the approximated results are in excellent agreement with the simulated results. The simulation parameters used to generate Fig. 35 are the same as those used to generate Fig. 28. By comparing the results in both figures, we find that, to achieve the same EMI, much less power is required in Rician fading than in Rayleigh fading, which supports the discussion in Remark 22. To illustrate the ROC of the EMI, we plot H_X^p − I_rician,X versus p in Fig. 36. As can be observed, in the high-SNR regime, the derived asymptotic results approach the numerical results. Besides, it can be observed that lower modulation orders yield faster ROCs [Footnote 7].
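The EMI for finite constellations discussed above can be checked with a standard Monte Carlo estimator of the mutual information of equiprobable square M-QAM over a complex AWGN channel. In the sketch below, the effective SNR p∥h∥²/σ² is folded into a single assumed snr argument; this is an independent illustration of the quantity being approximated, not the paper's closed-form approximation in Theorem 12.

```python
import numpy as np

def qam_constellation(M):
    """Unit-energy equiprobable square M-QAM (M must be a perfect square)."""
    m = int(np.sqrt(M))
    pam = np.arange(m) - (m - 1) / 2
    pts = (pam[:, None] + 1j * pam[None, :]).ravel()
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def mi_mc(M, snr, n_mc=20_000, seed=2):
    """Monte Carlo mutual information I(X;Y), Y = sqrt(snr)*X + N, N ~ CN(0,1)."""
    rng = np.random.default_rng(seed)
    x = qam_constellation(M)
    n = (rng.normal(size=(n_mc, 1)) + 1j * rng.normal(size=(n_mc, 1))) / np.sqrt(2)
    gap = 0.0
    for xq in x:
        # distances sqrt(snr)*(x_q - x_q') + n for every candidate symbol x_q'
        d = np.sqrt(snr) * (xq - x[None, :]) + n          # shape (n_mc, M)
        log_sum = np.log2(np.sum(np.exp(np.abs(n) ** 2 - np.abs(d) ** 2), axis=1))
        gap += np.mean(log_sum) / len(x)
    return np.log2(M) - gap

for M in (4, 16):
    for snr_db in (0, 10, 20):
        print(M, snr_db, round(mi_mc(M, 10 ** (snr_db / 10)), 3))
```

As snr grows, the estimate saturates at log₂(M), and lower-order constellations saturate at lower power: the same qualitative behaviour as the ROC comparison in Fig. 36.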
8) Summary of the Analytical Results: For convenience, we summarize the analytical results for the OP, ECC, and EMI in Table VII. Despite being developed for MISO channels, the expressions given in Table VII also apply to other types of channels subject to single-stream transmission, such as single-input multiple-output (SIMO) channels, single-stream MIMO channels, and multicast channels. The only difference lies in the statistics of the channel gain. For example, the received SNR for single-stream MIMO channels can be written as γ_St-MIMO = γ · a_St-MIMO with the channel gain given by a_St-MIMO = |vᴴHw|² [120], where w and v are the beamforming vectors utilized at the transmitter and receiver, respectively. As another example, the received SNR for a K-user MISO multicast channel can be written as γ_Multicast = γ · a_Multicast with the channel gain given by a_Multicast = min_{k=1,...,K} |h_kᴴw|² [127], where w is the transmit beamforming vector, and h_k is the channel vector of user k. After obtaining the PDF and CDF of a_St-MIMO or a_Multicast, one can directly leverage the expressions in Table VII to analyze the OP, ECC, and EMI for the corresponding channels; a numerical sketch of these two channel gains is provided after this subsection. We also note that, despite requiring a more complicated analysis than MISO channels, the OP, ECC, and EMI achieved in MIMO channels still follow the standard high-SNR forms given in (142) (or (174)), (146), and (158) (or (181)), respectively.

[Footnote 7: By comparing Fig. 28 and Fig. 35, we find that the EMI curves for Rician fading are similar to those for Rayleigh fading. Hence, we omit the corresponding numerical results for the EE-SE tradeoff and a PSK-QAM comparison for brevity.]

C. Discussion and Open Research Problems

We have analyzed several fundamental performance evaluation metrics for NFC for both deterministic and statistical near-field channel models. It is hoped that our established analytical framework and derived results will provide in-depth insight into the design of NFC systems. However, there are still numerous open research problems in this area, some of which are summarized in the following.

• Information-Theoretic Limit Characterization: Understanding the information-theoretic aspects of NFC is vital for practical implementation. NFC differs from conventional FFC with regard to the channel and signal models, and thus further efforts are required to explore the information-theoretic limits of NFC. For SPD antennas, most information-theoretic results developed for FFC also apply to NFC if the channel model is adjusted accordingly. However, this is different for CAP antenna-based NFC. In an NFC channel established by CAP antennas, determining the information-theoretic limits and design principles has to be based on continuous electromagnetic models, which gives rise to the interdisciplinary problem of integrating information theory and electromagnetic theory. Fundamental research on this topic deserves in-depth study.

• Network-Level Performance Analysis: In practice, NFC will be deployed in multi-cell environments. As the density of wireless networks increases, inter-cell interference becomes a major obstacle to realizing the benefits of NFC. As such, analyzing NFC performance at the network level and unveiling system design insights with respect to interference management is crucial. Multi-cell settings yield more complicated wireless propagation environments. For example, the near field of one BS may overlap with another BS's far field or near field. Analyzing the NFC performance in such a complex communication scenario is challenging, and constitutes an important direction for future research.
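Circling back to the single-stream channel gains mentioned in the summary above, the following sketch illustrates how empirical statistics of a_St-MIMO and a_Multicast plug into the generic outage expression P = Pr{a < (2^R − 1)σ²/p}. The i.i.d. Rayleigh statistics, the SVD-based choice of w and v, the fixed multicast beamformer, and all numeric parameters are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, Nr, K = 4, 2, 3
n_mc = 20_000
p, sigma2, R = 10.0, 1.0, 1.0              # transmit power, noise power, target rate (assumed)
gamma_th = (2 ** R - 1) * sigma2 / p

# Single-stream MIMO: with unit-norm w, v the best gain |v^H H w|^2 is the
# squared largest singular value of H (one common assumption for w, v).
a_mimo = np.empty(n_mc)
for t in range(n_mc):
    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
    a_mimo[t] = np.linalg.svd(H, compute_uv=False)[0] ** 2

# Multicast: fixed (hypothetical) beamformer w; the gain is the worst user's |h_k^H w|^2.
w = np.ones(Nt) / np.sqrt(Nt)
Hk = (rng.normal(size=(n_mc, K, Nt)) + 1j * rng.normal(size=(n_mc, K, Nt))) / np.sqrt(2)
a_mc = np.min(np.abs(Hk @ w) ** 2, axis=1)

print("OP, single-stream MIMO:", np.mean(a_mimo < gamma_th))
print("OP, multicast         :", np.mean(a_mc < gamma_th))
```

Replacing the empirical CDFs here with the closed-form statistics of a_St-MIMO or a_Multicast is exactly the substitution that makes the Table VII expressions reusable across these channel types.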
V. CONCLUSIONS

This paper has presented a comprehensive tutorial on the emerging NFC technology, focusing on three fundamental aspects: near-field channel modelling, beamforming and antenna architectures, and performance analysis. 1) For near-field channel modelling, various models for SPD antennas were introduced, providing different levels of accuracy and complexity. Additionally, a Green's function method-based model was presented for CAP antennas. 2) For beamforming and antenna architectures, the unique beamfocusing property of NFC was highlighted, and practical antenna structures for achieving beamfocusing in narrowband and wideband NFC were introduced, along with practical beam training techniques. 3) For performance analysis, the received SNR and power scaling law under deterministic LoS channels for both SPD and CAP antennas were derived, and a general analytical framework was proposed for NFC performance analysis in statistical multipath channels, yielding valuable insights for practical system design. Throughout this tutorial, we have identified several open problems and research directions to inspire and guide future work in the nascent field of NFC. As NFC is still in its infancy, we hope that this tutorial will serve as a valuable tool for researchers, enabling them to explore the vast potential of the "NFC Golden Mine".

APPENDIX A
PROOF OF LEMMA 1

The transmit-mode polarization vector at the transmit antenna essentially represents the normalized electric field. The electric field E(r, s) ∈ C³ generated at r by a source at s is determined by J(s) = J_x(s)û_x + J_y(s)û_y + J_z(s)û_z, the electric current vector (with û_x, û_y, and û_z representing the unit vectors in the x, y, and z directions), and by G(r − s) ∈ C^(3×3), the Green function, where µ₀ is the free-space permeability, ω is the angular frequency of the signal, and k₀ is the wave number. Mathematically, the Green function can be further expanded as in (189). To guarantee ∥x∥ ≫ λ, the electric field should not be observed in the reactive near field, which is a mild condition for practical systems. Henceforth, we assume ∥x∥ ≫ λ and directly exploit (189). In this case, the normalized electric field generated at r by the source at s follows accordingly. The proof is thus completed.
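As a small illustration of the polarization argument in Appendix A, the sketch below computes the normalized field direction obtained by projecting the current direction Ĵ onto the plane orthogonal to the propagation direction, which is the dominant behaviour of the free-space Green function when ∥x∥ ≫ λ. The geometry values are arbitrary assumptions, and the real-valued simplification is used only for clarity.

```python
import numpy as np

def polarization_vector(r_vec, j_hat):
    """Transmit-mode polarization vector under the r >> lambda approximation:
    the transverse projection (I - r_hat r_hat^T) of the current direction, normalized."""
    r_hat = r_vec / np.linalg.norm(r_vec)
    proj = np.eye(3) - np.outer(r_hat, r_hat)   # removes the radial field component
    e = proj @ j_hat
    return e / np.linalg.norm(e)

# Example: x-directed current J = [1, 0, 0]^T observed from an oblique direction
r = np.array([1.0, 2.0, 0.5])
print(polarization_vector(r, np.array([1.0, 0.0, 0.0])))
```

The projection makes explicit why the polarization loss term G₂ in the SNR expressions depends on the relative geometry of the source current and the observation direction.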
APPENDIX B
PROOF OF LEMMA 2

To determine the depth of focus, we can obtain an approximation involving the function C(η). Thus, the depth of focus follows directly. The proof is thus completed.

APPENDIX D
PROOF OF THEOREM 1

The effective power gain from the (m, n)-th transmit array element to the receiver satisfies (205), where S_{m,n} denotes the surface region of the (m, n)-th array element, e_a · ds is the maximal value of the effective antenna area in the x-z plane located around s, and h_i(s, r) is the complex-valued channel from a point source located at s to the receive point r. Recalling that the system operates in the radiating near-field region, i.e., r ≫ λ, and that the size √A of each individual element is on the order of a wavelength, we have r ≫ √A. Here, r ≫ √A means that the variation of the complex-valued channel h_i(s, r) across different points s ∈ S_{m,n} is negligible. Hence, we can rewrite (205) as (206). Next, we note that the influence of the effective aperture loss and the polarization loss was not considered in (35). Thus, to obtain a general expression for the received SNR, we should add the effective aperture loss and polarization loss terms to (35), which yields |h_U(s_{m,n}, r)|² = G₁(s_{m,n}, r)G₂(s_{m,n}, r)G₃(s_{m,n}, r). (208) In (208), G₁(s_{m,n}, r) (given by Eq. (37)), G₂(s_{m,n}, r) (given by Eq. (38)), and G₃(s_{m,n}, r) ≜ 1/(4π∥s_{m,n} − r∥²) model the effective aperture loss, the polarization loss, and the free-space path loss, respectively. Under the USW channel model, the powers radiated by different transmit antenna elements are affected by identical polarization mismatches, projected antenna apertures, and free-space path losses [39]. Thus, it follows that |h_U(s_{m,n}, r)|² = G₁(s₀, r)G₂(s₀, r)G₃(s₀, r), where s₀ denotes the center location of the transmit UPA. In the considered system, we have s₀ = 0, from which (111) follows directly. The proof is thus completed.

APPENDIX E
PROOF OF THEOREM 2

Note that the influence of the effective aperture loss and the polarization loss was not considered in (36). To obtain a general expression for the received SNR, we should add the effective aperture loss and polarization loss terms back into (36). Under the NUSW channel model, the powers radiated by different transmit antenna elements are affected by the same polarization mismatches and have the same projected antenna apertures, but experience different free-space path losses [39]. The resulting rectangular area is further partitioned into N_x N_z sub-rectangles, each with equal area ϵ². Since ϵ ≪ 1, we have f₁(x, z) ≈ f₁(nϵ, mϵ) for all (x, z) ∈ {(x, z) | (n − 1/2)ϵ ≤ x ≤ (n + 1/2)ϵ, (m − 1/2)ϵ ≤ z ≤ (m + 1/2)ϵ}. Based on the concept of double integrals, we obtain (217). Since Ω ∈ [0, 1] and Φ ∈ [0, 1], (217) reduces to (218) as N_x, N_z → ∞. As shown in Fig. 37, the integration region in (218) is bounded by two disks with radii R₂ = (ϵ/2) min{N_x, N_z} and R₁, respectively. In the general channel model, the powers radiated by different transmit antenna elements are affected by different free-space path losses, projected antenna apertures, and polarization mismatches. Consequently, the gains differ for (m, n) ≠ (m′, n′). Inserting (226) into (110) yields (115). The proof is thus completed.

APPENDIX H
PROOF OF COROLLARY 3

Based on (37) and (38), when ρ̂_w(r) = ρ̂ = Ĵ(s_{m,n}) = [1, 0, 0]ᵀ, ∀m, n, we have (230). Following the same approach as for obtaining (217), we approximate γ_General accordingly, where s₀ denotes the location of the center of the transmit CAP surface. In the considered system, we have s₀ = 0, from which (123) follows directly. The proof is thus completed.

APPENDIX J
PROOF OF LEMMA 4

The channel gain ∥h∥² satisfies ∥h∥² = h̃ᴴRh̃. (237) To facilitate the derivation, we perform an eigenvalue decomposition of R and obtain R = UᴴΛU. Matrix U is a unitary matrix with UUᴴ = I, and Λ = diag{λ₁, ..., λ_{r_R}, 0, ..., 0}, where {λᵢ > 0}_{i=1}^{r_R} are the positive eigenvalues of R. Since UUᴴ = I, we have Uh̃ = [h̃₁, ..., h̃_N]ᵀ ∼ CN(0, I). Since {h̃ᵢ}_{i=1}^{N} contains N i.i.d. complex Gaussian distributed variables, {|h̃ᵢ|²}_{i=1}^{N} contains N i.i.d. exponentially distributed variables, each with CDF 1 − e^(−x), x ≥ 0. Then, by virtue of [129], the PDF of Σ_{i=1}^{r_R} λᵢ|h̃ᵢ|² is given by (138). The CDF of ∥h∥² can be further calculated accordingly. The proof is thus completed.

APPENDIX N

We rewrite (256) in terms of h̄₁, ..., h̄_{r_R} and the eigenvalues λᵢ; the decomposition of ∥h∥² into ã₀ and ã in (163) then follows. The proof is thus completed.
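The sub-rectangle partition used in the proof of Theorem 2 above is, in effect, a midpoint Riemann sum. The following sketch checks numerically that the sum over cells of area ϵ² approaches the corresponding double integral; the path-loss-like integrand and the aperture parameters are illustrative assumptions, not the paper's exact f₁.

```python
import numpy as np
from scipy import integrate

# Path-loss-like integrand over the transmit aperture (illustrative choice):
# receiver at (0, d, 0), element at (x, 0, z)  ->  1 / (4*pi*(x^2 + d^2 + z^2))
d = 5.0
f = lambda x, z: 1.0 / (4 * np.pi * (x ** 2 + d ** 2 + z ** 2))

eps, Nx, Nz = 0.05, 40, 40        # element spacing and half-counts (assumed)
n = np.arange(-Nx, Nx + 1)
m = np.arange(-Nz, Nz + 1)
X, Z = np.meshgrid(n * eps, m * eps)
riemann = np.sum(f(X, Z)) * eps ** 2       # sum over sub-rectangles of area eps^2

exact, _ = integrate.dblquad(f, -Nz * eps - eps / 2, Nz * eps + eps / 2,
                             lambda z: -Nx * eps - eps / 2, lambda z: Nx * eps + eps / 2)
print(riemann, exact)   # the two values agree increasingly well as eps -> 0
```

Shrinking eps while keeping the physical aperture fixed tightens the agreement, which is the limiting step that turns the NUSW element sum into the integral form (217)-(218).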
APPENDIX O
PROOF OF COROLLARY 11

According to [131, Eq. (2.16)], the PDF of |CN(h̄ᵢ, λᵢ)|² is given by (259), where I_n(·) is the n-th-order modified Bessel function of the first kind. By [124, Eq. (10.25.2)], we have a series expansion of the Bessel function. The Laplace transform of f_{|CN(h̄ᵢ,λᵢ)|²}(x) is then derived by substituting (259) into (260) and calculating the resulting integral with [100, Eq. (3.326.2)]. When s → ∞, we obtain its asymptotic behaviour. Since the {CN(h̄ᵢ, λᵢ)}_{i=1}^{r_R} are mutually independent, the Laplace transform of ã equals the product of the individual Laplace transforms, which yields (264). Thus, by performing the inverse Laplace transform of (264) and referring to [132], the PDF of ã, i.e., f_ã(x) for x → 0⁺, can be obtained. It follows that the CDF of ã for x → 0⁺ satisfies the corresponding asymptotic form. When p → (2^R − 1)σ²/ã₀, the asymptotic expression for the OP follows, which completes the proof.

Floating material (contents fragments, figure captions, and body fragments recovered from the original layout):

• Contents fragments: Wave-Based Channel Model for SPD Antennas; B. Non-Uniform Model for SPD Antennas; C. Green's Function-Based Channel Model for CAP Antennas; D. Discussion and Open Research Problems; E. Near-Field Beam Training.

• Fig. 2. Condensed overview of this tutorial and outlook on NFC.

• Fig. 4 (with accompanying body text): Fig. 1 highlights the primary distinction between near-field and far-field channels for SPD antennas. Specifically, far-field channels are characterized by planar waves, whereas near-field channels are characterized by spherical waves. Consequently, for far-field channels, the angles of the links between each antenna element and the receiver are approximated to be identical. In this case, the propagation distance of each link linearly increases with the antenna index, resulting in a linear phase for far-field channels. However, for near-field channels, each link has a different angle, leading to a non-linear phase. In the following, we review the near-field spherical-wave-based channel models for MISO and MIMO systems, and highlight the primary differences compared to the far-field planar-wave-based channel model. 1) MISO Channel Model: Let us consider a MISO system that comprises an N-antenna transmitter, where N = 2Ñ + 1, and a single-antenna receiver.

• Fig. 10. Illustration of Rayleigh distance, uniform-power distances, and Fresnel distance, where the BS is equipped with a ULA with N = 257 antennas and operates at a frequency of 28 GHz. The antenna spacing is set to d = λ/2 = 0.54 cm. The aperture of the ULA is D = (N − 1)d = 1.37 m. (Section heading: A General Model of Near-Field Channels.)

• Fig. 11. Correlation of array response vectors for different sizes of the antenna array with half-wavelength antenna spacing in a system operating at a frequency of 28 GHz.

• Fig. 12. Depth of focus of a beamformer focused at different distances r in direction θ = π/2. Here, we assume the BS is equipped with a ULA with N = 257 antennas and operates at a frequency of 28 GHz. The antenna spacing is set to half a wavelength. Therefore, we have r_R ≈ 350 m and r_DF ≈ 35 m. (Body fragment: for θ = π/2, r_DF is given in closed form.)

• Lemma 3 (Near-field beam split [58]): In wideband NFC, the beam generated by the PS-based beamformer f_RF = a*(f_c, θ_c, r_c) is focused on location (θ_m, r_m) for subcarrier m, as given in (204) below.

• Architecture fragment: the full-dimensional candidate data streams, analog beamformer, and digital beamformer are denoted accordingly; F_S ∈ R^(N_RF^max × N_RF^max) represents the ON/OFF status of the switching network. The inter-element distance is d, where d ≥ √A. It is worth noting that when d = √A, the SPD UPA turns into a CAP surface. The central location of the (m, n)-th BS antenna element is denoted by s_{m,n} = [nd, 0, md]ᵀ, where n ∈ N_x ≜ {−Ñ_x, ..., Ñ_x} and m ∈ N_z ≜ {−Ñ_z, ..., Ñ_z}.

• Fig. 23. Comparison of SNRs for different channel models versus the number of antennas N for a UPA with SPD antennas. The system is operating at a frequency of 28 GHz; p/σ² = 0 dB; N_x = N_z = ...

• Recovered result fragment (CAP USW SNR): Proof. Please refer to Appendix I.
■ By setting L_x, L_z → ∞, i.e., S_CAP = L_x L_z → ∞, the asymptotic SNR for the USW model for CAP antennas is obtained and provided in the following corollary.

Corollary 5. As L_x, L_z → ∞ (S_CAP → ∞), the asymptotic SNR for the USW model for CAP antennas satisfies lim_{S_CAP→∞} γ_CAP,USW ≃ O(S_CAP).

• Beam-split derivation (accompanying Lemma 3; a short numerical sketch of these relations follows this list): Defining Ñ = (N − 1)/2, the normalized array gain achieved by f_RF = a*(f_c, θ_c, r_c) at location (θ, r) at subcarrier m is given by (1/N) aᵀ(f_m, θ, r) f_RF = (1/N) Σ_{n=−Ñ}^{Ñ} e^{jπ(δ_n(θ_c, r_c) − (f_m/f_c)δ_n(θ, r))}. To analyze the array gain, we define a function G(x, y) = (1/N) Σ_{n=1}^{N} e^{j(nx + n²y)}. Then, the array gain can be written as G(π(cos θ_c − (f_m/f_c)cos θ), −π(d sin²θ_c/(2r_c) − (f_m/f_c) d sin²θ/(2r))). It can be easily verified that the maximum value of G(x, y) is obtained when (x, y) = (0, 0). The location (θ_m, r_m) on which the beam focuses for subcarrier m is the location that has the maximum array gain, i.e., cos θ_c − (f_m/f_c)cos θ_m = 0 and d sin²θ_c/(2r_c) − (f_m/f_c) d sin²θ_m/(2r_m) = 0. Thus, we have θ_m = arccos((f_c/f_m) cos θ_c) and r_m = (f_m sin²θ_m)/(f_c sin²θ_c) · r_c. (204)

• Fragments of the Appendix D/E equations: (p/σ²) e_a |h_i(s_{m,n}, r)|² ∫_{S_{m,n}} ds = |h_i(s_{m,n}, r)|² A e_a (206), where the integral ∫_{S_{m,n}} ds returns the physical area of the (m, n)-th transmit array element. Thus, the received SNR for the USW model (i = U) is proportional to A e_a Σ_{n∈N_x} Σ_{m∈N_z} |h_U(s_{m,n}, r)|². (207) Moreover, |h_N(s_{m,n}, r)|² = G₁(s_{m,n}, r)G₂(s_{m,n}, r)G₃(s_{m,n}, r).

• Table captions: TABLE III: Comparison between near-field and far-field channel models for SPD antennas. TABLE IV: Summary of hybrid beamforming architectures for NFC. TABLE V: Comparison of SNRs obtained for different near-field LoS channel models. (RMSE denotes the root mean square error caused by approximating the exact MI.) TABLE VII: Summary of the analytical results for statistical multipath channels.
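The beam-split relations in (204) are straightforward to evaluate. The sketch below computes the drifted focus point (θ_m, r_m) for a few subcarriers; the carrier frequency, focus angle, and focus distance are assumed values chosen only for illustration.

```python
import numpy as np

fc, theta_c, r_c = 28e9, np.pi / 3, 10.0    # carrier, focus angle, focus distance (assumed)

def beam_focus(fm, fc=fc, theta_c=theta_c, r_c=r_c):
    """Focus point of the PS-based beamformer a*(fc, theta_c, r_c) at subcarrier fm,
    using the near-field beam split relations of Lemma 3 / (204)."""
    cos_tm = np.clip((fc / fm) * np.cos(theta_c), -1.0, 1.0)
    theta_m = np.arccos(cos_tm)
    r_m = (fm * np.sin(theta_m) ** 2) / (fc * np.sin(theta_c) ** 2) * r_c
    return theta_m, r_m

for fm in (27e9, 28e9, 29e9):
    t, r = beam_focus(fm)
    print(f"f_m = {fm / 1e9:.0f} GHz: theta_m = {np.degrees(t):.2f} deg, r_m = {r:.2f} m")
```

At the carrier frequency the focus point coincides with (θ_c, r_c), while the edge subcarriers drift in both angle and range, which is the beam-split effect that wideband NFC architectures must compensate.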
2023-05-30T01:16:17.345Z
2023-05-28T00:00:00.000
{ "year": 2023, "sha1": "8136a6dbde8ac9b8d1422231bb192edeff8ac19b", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/8782661/8901158/10220205.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "25d5d582da9e72ac800e1d4827e5ab00e8ecce07", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science", "Engineering", "Mathematics" ] }
221341820
pes2o/s2orc
v3-fos-license
Big Brain Data Initiative in Universiti Sains Malaysia: Challenges in Brain Mapping for Malaysia

Universiti Sains Malaysia has been running the Big Brain Data Initiative project for the last two years, as brain mapping techniques have proven to be important in understanding the molecular, cellular and functional mechanisms of the brain. This Big Brain Data Initiative can be a platform for neurophysicians and neurosurgeons, psychiatrists, psychologists, cognitive neuroscientists, neurotechnologists and other researchers to improve brain mapping techniques. Data collection from a cohort of the multiracial population in Malaysia is important for present and future research and for finding cures for neurological and mental illness. Malaysia is one of the participants of the Global Brain Consortium (GBC) supported by the World Health Organization. This project is a part of its contribution via the third GBC goal, which is influencing the policy process within and between high-income countries and low- and middle-income countries, such as pathways for fair data-sharing of multi-modal imaging data, starting with electroencephalographic data.

The Importance of Big Data in Brain, Mind and the Neurosciences

Information has been the key to building better organisations and new developments. An organisation can organise itself optimally to deliver the best outcomes if it has more information. That is why data collection is a significant part of every organisation (1). In healthcare, Big Data can be used in the prediction of disease outcomes; the prevention of co-morbidities, mortality, premature deaths and disease development; improving treatment and quality of life; and reducing the cost of medical treatment (7). Today, patients want to participate in decision-making about their healthcare options and choices. Big Data will help patients by providing up-to-date information that assists them in making the best decisions and complying with medical treatment. In brain, mind and neurosciences, Big Brain Data can lead to important discoveries; in particular, it can help us to understand the brain's structure and function, identify new biomarkers of brain pathology and increase the performance of neurotechnological devices (such as brain-computer interfaces [BCIs]) (3). This growing Big Data would also be of much use for advanced machine learning algorithms, specifically artificial neural networks for deep learning and related methods.

Introduction

The Fourth Industrial Revolution loads our lives with tons of data (1), data that require routine collection, storage, processing and analysis (2). With the support of technological advances, this has led to the concept of 'Big Data', describing data that are large and massive (1).

An Overview of Big Data Definition

The need for Big Data analytics is steadily increasing in various fields, as it can be used to remodel the operational processes of an organisation. The term Big Data refers to the collection of large volumes of both structured and unstructured data for the purpose of identifying unknown relationships or other informative attributes by using computational analysis, often involving advanced machine learning algorithms (3). It is an information asset that is often characterised by three properties, namely high volume, velocity and variety. It requires specific technologies and analytical methods for its transformation into meaningful values (4).
The challenges when dealing with Big Data include capture, curation, storage, search, sharing, transfer, analysis and visualisation (5). Although Big Data is not associated with any specific volume of data, Big Data storage often involves terabytes, petabytes and even exabytes of data captured over time (6). In health research, there is a specific definition of Big Data proposed by the Health Directorate of the Directorate-General for Research and Innovation of the European Commission: Big Data in health encompasses high-volume, high-diversity biological, clinical, environmental and lifestyle information, collected from single individuals to large cohorts, in relation to their health and wellness status, at one or several time points (2).

Keywords: Big Data, Big Brain Data, brain mapping, neurosciences, initiative, neuroimaging

Activities carried out up to 28th February 2020 at Varadero, Cuba, helped to improve the international standards of the USM Big Data Project, as shown in the accompanying figures. Currently, the project is in the testing phase. Figure 6 shows the data pre-processing (curation) pipeline that has been established to understand the formats of data and the degree of aggregation needed from the various sources of input. Meanwhile, Table 1 shows the latest progress of the Big Brain Data Initiative of USM.

Benefits of Big Brain Data to Researchers/Clinicians in Malaysia and the World

There are several benefits of Big Brain Data to researchers and clinicians, such as a better understanding of the brain's anatomy, including its structure and function; identifying new biomarkers of brain pathology; improving the performance of neurotechnological devices, for example BCIs (3); and the opportunity to explore and implement the 'Internet of Things' (IoT) in the health industries (1).

The Big Data project on retrospective and prospective data for the data hospital (pilot neuro) in Universiti Sains Malaysia (USM) was started on 14th August 2017 (8). It was led by the Brain and Behaviour Cluster, School of Medical Sciences (previously known as the Centre for Neuroscience Service and Research), in collaboration with the School of Computer Sciences, with three objectives. The main objective was to establish a centralised clinical data storage platform for future reference and research. The other two objectives were to create digital (paperless) access and data availability that can be accessed at university level by students, lecturers, clinicians and researchers. This project was developed to contribute to the third GBC goal, which is influencing the policy process within and between high-income countries and low- and middle-income countries, such as pathways for fair data-sharing of multi-modal imaging data, starting with EEG (9).
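The curation pipeline referenced above (and detailed in the Figure 6 caption below) can be read as a chain of stages. The following minimal sketch illustrates one way such a chain could be wired together; every function and field name here is hypothetical, chosen only to mirror the stages named in the caption, and is not USM's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str          # e.g. "medical" or "scanning" record (hypothetical field)
    payload: dict

def collect_and_aggregate(records):
    # keep only records that carry usable content
    return [r for r in records if r.payload]

def classify_disease(records):
    # hypothetical mapping from a diagnostic code to a disease class
    for r in records:
        r.payload["disease_class"] = r.payload.get("icd_code", "unknown")
    return records

def select_attributes(records, keep=("disease_class", "age")):
    return [{k: r.payload.get(k) for k in keep} for r in records]

def anonymise(rows):
    for row in rows:
        row.pop("patient_id", None)      # strip direct identifiers
    return rows

def to_repository(rows):
    # transform to a standard schema and persist (stubbed here)
    return {"n_records": len(rows), "rows": rows}

records = [Record("medical", {"icd_code": "G40", "age": 34, "patient_id": "X1"})]
print(to_repository(anonymise(select_attributes(classify_disease(collect_and_aggregate(records))))))
```

Chaining the stages as plain functions keeps each step (aggregation, classification, attribute selection, anonymisation, transformation) independently testable, which matters when the pipeline has to conform to an external standard such as the INCF format named in the caption.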
GBC is a global collaboration of brain, mind and neuroscientists across borders and disciplines, to build a fluid and connected global research community that is better able to advance equitable solutions to priority health challenges worldwide (10).

Figure 6. The data pre-processing (curation) pipeline © USM (8). It starts with the collection of records (medical records and scanning records), followed by collection and aggregation, disease classification, attribute selection, data anonymisation, data transformation into the International Neuroinformatics Coordinating Facility standard, and deposition in the neuro data repository.

An Overview of Brain Mapping

Brain mapping techniques have proven to be vital in understanding the molecular, cellular and functional mechanisms of the brain. Normal anatomical imaging can provide structural information on certain abnormalities in the brain. However, there are many neurological disorders for which structural studies alone are not sufficient. In such cases it is necessary to investigate the functional organisation of the brain. Brain mapping techniques can help in deriving useful and important information on these issues (11). According to the Society for Brain Mapping and Therapeutics (SBMT), United States of America, brain mapping is defined as the study of the structure and function of the brain and spinal cord through the use of imaging. Mapping structural and functional connections through the whole brain is important for understanding brain mechanisms and the physiological bases of brain diseases (12). The development of high-resolution neuroimaging and multielectrode electrophysiological recording can be considered a part of brain mapping that can help brain, mind and neuroscientists to map what the brain looks like as disease progresses or as treatments work. The neurotechnologies commonly used are positron emission tomography and fMRI, along with EEG, electrocorticography, MEG and, most recently, optical imaging with near-infrared spectroscopy (13).

The Challenges in Brain Mapping

Four common challenges in brain mapping have been identified, namely scale, complexity, speed and integration, as listed below.
i) Scale - with large numbers of neurons and synapses, human brain simulation would push exascale computers to work beyond their limits.

ii) Complexity - an almost limitless set of parameters is required to produce a biologically faithful simulation of the brain. The brain's extracellular interactions and molecular-scale processes, such as receptor binding, are examples of details not incorporated into simulation models.

iii) Speed - currently no technology can support large-scale simulations faster than real time; models normally run slower. This is because the human brain's processes, such as development and learning, take years or decades to perform.

iv) Integration - combinations of smaller models of brain regions are needed in order to model functions that involve brain-wide networks. Top-down and bottom-up models must be integrated, for example, those that cast the brain as a hypothesis-testing system and biophysical models that represent simulations (14-19).

Conclusion

The Big Brain Data Initiative in USM, with its objectives of establishing a centralised clinical data storage platform for future reference and research, and of creating digital (paperless) access and data availability that can be accessed at university level by students, lecturers, clinicians and researchers, is reaching the testing phase. The next stage is to implement brain mapping techniques for understanding the molecular, cellular and functional mechanisms of the brain. Four common challenges will be faced by researchers: scale, complexity, speed and integration. Brain mapping techniques and computer technologies should play their roles simultaneously in order to overcome these challenges.

Funds

None.
2020-08-20T10:08:58.662Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "cc1ce25e21a347cd811e529976637f6e8eb0d359", "oa_license": "CCBY", "oa_url": "https://doi.org/10.21315/mjms2020.27.4.1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "21dfd87fc97cc404eebe60bbadd8f9535329c74e", "s2fieldsofstudy": [ "Medicine", "Psychology", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
226379799
pes2o/s2orc
v3-fos-license
The Stochastic Effect of Nanoparticles Inter-Separation Distance on Membrane Wettability During Oil/Water Separation

Engineers and scientists are faced with a major challenge: developing predictive models of nanoparticle scattering during membrane coating for efficient oil/water separation. In the current study, mechanical molecular simulation studies of key parameters for structure/property correlations during nanoparticle coating are analysed with respect to surface-energy-driven separability. The tools of stochastic processes were used to study the random nature of nanoparticle scattering during the membrane coating process using a specific coating technique. The results obtained in this study revealed theoretical facts that were validated experimentally. It was shown that there is a critical nanoparticle scattering that offers optimal membrane wettability. It was also revealed that total membrane coating does not lead to optimal nanoparticle scattering during coating. It was further observed that, as nanoparticle inter-separation distances decreased, membrane wettability increased. Clusters were observed on the membrane surface during high- and low-pressure coating, and these impacted wettability: the clusters negatively affected the orientation of nanoparticles and the wettability. It was shown that there is an optimal nanoparticle inter-separation distance, which gave optimal membrane wettability after the 3rd HP round of coating. Different orientations of nanoparticles, and their size, shape, spatial distribution and morphology, were also observed to impact membrane wettability. More clusters were observed in LP coating than in HP coating, and these clusters greatly impacted membrane wettability negatively. Good correlation was observed between the SEM images, EDS, and descriptive statistics of the amount of the elements in the surface layer formed after the different coating rounds.

Keywords: Nanoparticle, inter-separation distance, coating, surface tension and surface energy.

Introduction

There is no standard technique of nanoparticle coating when using spray guns (1-4).
Sometimes, by using the spray gun, an appropriate coating thickness is created that is homogeneous, with proper nanoparticle inter-separation distances and films (1-4). It has been reported that the jet impact momentum of droplets during coating depends on the coating pressure (1-5). There is high and low impact momentum when using the jet gun (1-5). In high-pressure coating, the droplet penetrates into pits and scratches (1-4). The coating droplet depositions are also affected by the air flow field and by the uncontrolled gun-to-target distance arising from different operator skill levels (1-2). This has led to the current problem of coating deficiencies, thereby limiting transfer efficiency, which influences the film thickness distribution and nanoparticle scattering (1-6). There are other problems during coating, such as strong shoreline winds, which affect the membrane coating process and the surface properties (1-5). It was reported in (1-6) that, to understand coating processing behaviour, it is important to carry out detailed research investigations on nanoparticle scattering, nanoparticle spatial distribution, nanoparticle morphology, nanoparticle sizes, nanoparticle shape and nanoparticle thickness during high-pressure (HP) and low-pressure (LP) coatings.

Limited research on particle scattering has been carried out by Plesniak et al. [2] and Ye et al. [1] on the wind effect during coating. Plesniak et al. [2] focused on spray transfer efficiency (TE) and the effect of spray parameters during booth coating. Parameters such as mass flow rate, gun-to-target distance and gun-to-target angle during coating were investigated (1-6, 8-25). Few research works gave a clear correlation between theoretical and experimental data (1-6, 8-24). Experimental and numerical research investigations on different atomizers, such as high-speed rotary bells having an electrostatic support system, pneumatic application using coaxial jet-type atomizers, as well as powder coating (1), were carried out at the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) [1, 4-6]. Parameters such as film thickness distribution and efficiency under varying conditions, and varying boundary conditions for conventional materials, were analysed. These research studies presented great contributions to the experimental results of spray coating using a flat jet gun (1). The experimental results also gave the necessary boundary conditions for the relevant trajectory during coating. Different surface roughnesses were reported to impact membrane wettability, due to varying spatial distribution, sizes, shape, morphology, orientation and inter-separation distances (1-5, 8-25). Computed film thickness distribution and transfer efficiency during the coating process were compared with experimental results (1). Parametric studies were done during numerical simulations, and the effects of side winds and gun-to-target distances on film thickness were analysed (1). It was observed that material coatings are greatly affected by winds during the coating process. Therefore, high-pressure (HP) coating and low-pressure (LP) coating have different impacts on spatial distribution, sizes, shape, morphology, orientation and inter-separation distances (1-5, 8-25). These have different impacts on membrane surface roughness and smoothness, which in turn impact surface-energy- and surface-tension-driven separability.
The lotus effect, which is a traditional model of surface wettability, holds here that a smooth surface enhances wettability and that a rough surface decreases wettability (1-5, 8-25). Most of the existing models used for wettability take into consideration only the external parameters during coating, such as the diameter of the jet spray, coating pressure, impact momentum, coating distance and angle of coating (1-6, 8-25). These parameters are insufficient, since several parameters are ignored during coating. This has resulted in the fabrication of membrane surfaces that are inefficient during oil and water separation. Therefore, to design a membrane surface that is more efficient during oil/water separation, all the ignored parameters must be taken into consideration. These parameters are the random nature of nanoparticle scattering, nanoparticle spatial distribution, nanoparticle morphology, nanoparticle sizes, nanoparticle shape and nanoparticle inter-separation distances. In the current study, more of the physical parameters that impact membrane wettability and inter-separation distances are taken into consideration. This has led to the design of a new membrane surface with improved efficiency of oil/water separation.

2.2. Glass, hydrophobic nanoparticles and a spray gun were purchased for the experiment. The glass materials were washed to remove foreign impurities, such as dirt, that could have prevented proper blending of the nanoparticles on the glass surface during coating (1-6, 8-25). This was done with the help of distilled water and pre-clean water. The washed glass membranes were allowed to dry for 24 hours at room temperature. The coating process was done using HP coating and LP coating. The jet spray gun used for coating was kept 5 cm away from the membrane surface at a perpendicular angle (90 degrees). Before coating took place, an uncoated sample (the membrane material, i.e., the glass material and nano4glass) was taken for microscopy analysis. The first, second and third rounds of coating were done at LP, and samples after each round were removed for microscopy analysis. Similarly, first, second and third rounds of coating were done at HP, and samples (coated glass material) were removed for microscopy analysis. The second and third rounds of coating were done in less than three minutes, to prevent the membrane surface from becoming hydrophobic to the coated materials, which would repel the second and third coating rounds. The glass membranes without coating (control glass) were characterised using EDS to detect their elements. The following elements were observed, as shown in Fig. 1(b): oxygen (O), Mg, Si and Ca. The purchased hydrophobic nanoparticles used for membrane coating were also characterised using EDS, and the following elements were observed for the glass hydrophobic nanoparticles, as shown in Fig. 1(c): O, F, Na, Si, S and Ca. The hydrophobic nanoparticles for glass used in membrane coating contained the unique elements fluorine (F), Na and S, and the glass control sample contained the unique element Mg. The main aim of the current study was to investigate membrane inter-separation distances due to nanoparticle size, spatial distribution of nanoparticles, morphology of nanoparticles, nanoparticle shape and nanoparticle thickness, which impacted the inter-separation distances during HP and LP coating.
Microscopy analysis was done to analyse the random phenomena of these parameters during the HP and LP coating rounds, for the establishment of an empirical model of surface tension and surface energy and their impact on wettability.

Sample preparation for SEM, TEM and EDS

2.3.1 The samples were not polished, since the surface roughness of the coated hydrophobic nano4glass was the main parameter to be measured. The sample was embedded in epoxy resin blocks, and later the thin section to be analysed was prepared. The holders in which the glass samples were placed for microscopy analysis were 25 mm (1") diameter rounds. The glass samples are electrically non-conducting during analysis, so a conducting surface coating was applied to provide a proper path for the incident electrons to flow to ground during analysis. Normally, the coating material is vacuum-evaporated carbon (~10 nm thick), which has a minimal influence on X-ray intensities due to its low atomic number, as specified for SEM samples, so that it does not add unwanted peaks to the X-ray spectrum. To achieve higher resolution during SEM imaging, advanced detectors were used during the SEM analysis. These were used to selectively detect the different locations, indicated by spectrum 2 to spectrum 8 and called sites of interest, where the lens was able to capture results. This was to ensure accuracy and elemental validation of the findings on how particles were distributed on the membrane surface during coating, as shown in Fig. 1(e). During the SEM analysis, the detector used was an In-Lens SE detector (Zeiss Supra 40, FE-SEM, Oberkochen, Germany). It must be noted that the In-Lens detector was only able to capture images along a straight path. Therefore, the In-Lens detector was unable to capture images in the curved sections of the glass membrane, and as such those sections appear black in the captured SEM images. The nanoparticle sizes, shape, orientation, morphology and dispersion of lateral dimensions were measured. It should be noted that the STEM detector, placed under the samples, was used to capture images in transmission mode in the SEM during the experiment. This consists of sample holders which guide the transmitted electrons onto the electron multiplier, in the form of a gold plate, under the bright field. All the transmitted electrons are collected by the E-T detector. At the same time, the screening ring, when operated, prevents the X-rays emitted by the sample from reaching the EDS detector, and therefore it is important to remove the ring before an EDS analysis. Furthermore, a TEM grid transmission setup was used; the TSEM detector was able to analyse four samples on the holders, and EDS analysis was carried out immediately. The various images of the SEM and EDS configurations were captured for LP and HP. The coating thickness, surface spread, roughness, smoothness, contact angles, inter-separation distances, size, morphology and spatial distribution were observed and measured using SEM, the ImageJ particle analyser and energy-dispersive X-ray spectroscopy. The viscosities of the nanoparticle dispersions were measured at room temperature using a rheometer (Physica MCR301, Anton Paar GmbH, Graz, Austria). In the current study, the following EDS detectors were used to analyse the glass, coated glass and hydrophobic nanoparticles: a 10 mm² detector (Thermo Scientific, USA), a 10 mm² SDD (Bruker, Germany), a 100 mm² SDD (Thermo Scientific, USA), and an annular 60 mm² FlatQUAD SDD (Bruker, Germany).
The annular SDD is inserted between the pole shoe and the experimental sample, to give a very large solid angle for the X-rays emitted by the sample. For the TEM analysis, the samples were not etched or polished, since the coated nanoparticles were on the surface of the membrane. A standard TEM thin foil, 3 mm in diameter, was prepared for analysis by electrolytic twin-jet polishing (at −30 °C, 30 V) in a Struers Tenupol 2 filled with a 6% solution of perchloric acid in methanol. These observations were all carried out at 200 kV with a JEOL JEM 2000FX microscope equipped with an X-ray energy-dispersive spectrometer (XEDS) 53 LINK AN 10 000 (26). The diameter, length, orientation, angles, morphology and spatial distribution of the coated nanoparticles on the membrane surface were measured, as shown in Fig. 1(d).

It is observed from the hydrophobic nano4glass control samples that six main elements are found, as shown in Table 2. These elements are oxygen (O), fluorine (F), sodium (Na), silicon (Si), sulfur (S) and calcium (Ca). It is observed, as shown in Table 2, that there is a new element, fluorine (F), which is not found in the glass control samples. Fluorine (F) is the main element of the hydrophobic nanoparticles, which created the membrane hydrophobicity during the oil/water separation process. Therefore, F was the main scattering element that varied during the HP and LP coating rounds. The morphology, sizes, shape, orientation and spatial distribution of F changed during the coating rounds, resulting in different inter-separation distances, which impacted surface tension and surface energy. It is now important to analyse these coated surfaces.

In Figure 4, the SEM and EDS results of the membrane surface layer formed on glass after the 1st LP hydrophobic nanoparticle coating are presented. All SEM photos show different spreads of the surface density of F, O, Al, Si and K on the glass membrane surface. Additionally, clusters were observed on the reference image and the mixed F element. The inter-separation distances are bigger, and the morphology, spatial distribution, orientation, sizes and shape of F keep changing. This may indicate high surface roughness, which does not improve membrane wettability when related to the lotus effect on surface wettability. The lotus effect on surface wettability, as adopted here, states that the coated membrane surface must be smooth, since a smooth surface enhances wettability. Though the inter-separation distances are smaller in the second high-pressure coating, the increase in clusters in the 2nd round of coating compared with the 1st round shows that the 1st HP coating will perform better than the 2nd HP coating during oil/water separation. In Figure 7, proper nanoparticle orientation, morphology and distribution are observed, due to the absence of clusters. Materials are called nanoparticles due to their reduced sizes and enhanced properties. It can be concluded that when nanoparticle size, orientation, morphology and distribution evolve with few clusters and smaller inter-separation distances, surface wettability is more enhanced during oil/water separation. This is an indication that membrane wettability is most enhanced in the 3rd HP coating when compared with the other coating rounds in HP and LP. It was therefore necessary to derive mathematical models of the observed nanoparticle scattering on the membrane surface during the coating process.
Modelling membrane inter-separation distances during membrane coating

To model the nanoparticle inter-separation distance during membrane coating, it was observed through microscopy that the mean inter-particle distance of the nanoparticles during coating scales with the per-particle volume as r = 1/n^(1/3), where n = N/V is the nanoparticle number density on the membrane pores. Due to the spherical nature of the nanoparticles, the radius of the spherical per-particle volume can be given as (3/(4πn))^(1/3). It was also observed from the microscopy that, if the nanoparticle is at a given distance, the probability of the nanoparticle being at that distance from the origin where coating takes place is given by (1), where r is the size of the nanoparticle assumed to lie inside a sphere whose volume is given by (2).

Equation (2) gives the change in nanoparticle sizes on the membrane inter-separation distances during coating. From equation (2), it is important to study the effect of the membrane inter-separation distance with respect to a very small, infinitesimal change in the particle inside the sphere during coating. This is possible when taking into consideration the limit of a very small change in particle distance during coating, as the inter-separation distances decrease, as observed from the microscopy. The expression for this infinitesimal change is given as lim_{m→∞} (1 + 1/m)^m = e. This limit therefore defines the infinitesimal change in inter-separation distance during nanoparticle coating. The infinitesimal change in particle size on the membrane inter-separation distances is affected by the adhesive energy, or bonding energy, of the nanoparticles during coating. Membrane adhesive energy drives the membrane nanoparticles into the membrane pores. The membrane adhesive energy also affects the contact area between the scattered nanoparticles, the surface energy, the surface tension, the surface resistance and the shear stress resistance during coating. These parameters affect the total force on the membrane during coating, as shown in Fig. 9.

Figure 9. Nanoparticle coating on a solid membrane and adhesive energy during the coating process.

The energy of adhesion, which is the energy released when the nanoparticle liquid comes in contact with the membrane during coating, can be defined by taking into account the different forces that affect membrane wettability, as shown in Figure 9. The membrane surface morphology (rough or smooth), which gives rise to the frictional force affecting the surface energy of the membrane surface, played a major role in how the spherical or closely packed nanoparticles flowed through the membrane during coating. Another observed force during the coating process was the reaction force from the nanoparticles during coating. The forces acting on the membrane during coating are: the force from the nanoparticles (F_nano), which lowers the membrane surface energy; the force of viscosity (F_viscosity), due to the flow of nanoparticles during coating; the force of pressure, due to high or low pressure during coating (F_pressure); the force on the solid wall due to pressure during coating (F_down); the force on the wall due to oil (F_upward); and the force of friction (F_friction), together with the shear force and the force due to the demineralised water. These forces are shown in the schematic in Fig. 10.

Figure 10. Schematic of jet impact propulsion during the nano-coating process, and the membrane external and internal factors during the nanoparticle coating process.
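A minimal numerical sketch of the relations above computes the mean inter-particle distance and the per-particle sphere radius, and verifies the limit lim_{m→∞}(1 + 1/m)^m = e invoked for the infinitesimal change. The particle count N and volume V are assumed values used only for illustration.

```python
import numpy as np

N, V = 5.0e20, 1.0e-6            # nanoparticle count and coated volume in m^3 (assumed)
n = N / V                        # number density n = N/V
r_mean = n ** (-1 / 3)           # mean inter-particle distance, r = 1/n^(1/3)
r_sphere = (3 / (4 * np.pi * n)) ** (1 / 3)   # radius of the per-particle sphere
print(f"n = {n:.3e} m^-3, r = {r_mean:.3e} m, sphere radius = {r_sphere:.3e} m")

# Limit describing the infinitesimal change: lim_{m->inf} (1 + 1/m)^m = e
for m in (10, 1_000, 100_000):
    print(m, (1 + 1 / m) ** m)
print("e =", np.e)
```

Note how higher number densities n shrink both r and the sphere radius, which is the quantitative form of the observation that denser coatings reduce inter-separation distances.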
During nanoparticle coating, the jet impact and the jet propulsion depend on the following parameters: the viscosity, the velocity and the geometry of the membrane pores. These parameters affect the nanoparticle thickness and the membrane inter-separation distance during coating. Before membrane coating took place, it was observed and assumed that the glass surfaces were smooth and that the loss of energy due to the jet impact was zero. The nanoparticles at the outlet tip of the jet spray, which was circular in cross-section, impacted the coated material with a velocity (V), and the forces exerted by the spray gun on the nanoparticles along the x-axis, y-axis and z-axis are given by (3)-(5), where A is the cross-sectional area of the jet gun spray, ρ is the density of the nanoparticle coating in kg/m³, and V_{x,1}, V_{y,1}, V_{z,1} are the initial velocities in the directions of the jet spray gun. The total forces given in equation (6) affect the membrane inter-separation distances during coating. They also affect the contact area between the coated nanoparticles, the surface energy, the surface tension, the surface resistance and the shear stress resistance, the viscosity, and the geometry of the membrane. Therefore, it is very important to understand the effect of the membrane inter-separation distances during coating.

From first principles, surface energy is defined as the work done per unit area,

γ = W / A,   (7)

where W is the work done and A = 2πrS is the surface area of the membrane channel, with S the length of the membrane surface over which the surface energy is measured. Similarly, the relationship that describes the force per unit length in a surface can be described using surface tension,

σ = F / S₁,   (8)

where S₁ is the external length of the membrane external surface. The membrane surface energy and surface tension given by equations (7)-(8) depend on the membrane forces and on the scattering of the coated nanoparticles that lower the surface energy to improve wettability. To improve membrane wettability, the coated nanoparticles on the membrane surface must have the optimal inter-separation distances during nanoparticle coating. From expressions (7)-(8), the effect of the nanoparticle inter-separation distance on surface tension and surface energy can be inferred, in order to study their effect on wettability. It should be recalled that the nanoparticles are coated on the membrane surface with some spacing between them, called the inter-separation distance, as established in equation (2), and this spacing affects the membrane surface tension and surface energy, which affect membrane wettability. Equations (1)-(8) are solved simultaneously using the Engineering Equation Solver software (F-Chart Software, Madison, WI 53744, USA).

RESULTS AND DISCUSSION

The proposed models derived in this paper are tested with the following data: ρ = 1000 kg/m³, S₁ = 0. The results shown in Fig. 11(a-f) show that the inter-separation distances greatly affect membrane wettability. It is revealed that the decrease in nanoparticle sizes during nanoparticle scattering, which affects wettability, leads to an increase in membrane surface energy, i.e., increased hydrophobicity, during the initial stage of nanoparticle coating. It is also revealed that the continuous decrease of the nanoparticle sizes on the membrane surface during coating did not continue to increase the energy of the membrane surface during wettability, since the membrane surface became smoother, or less rough, due to the varying nanoparticle scattering on the membrane surface, which impacted wettability.
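Equations (7)-(8) are simple ratios once the geometry is fixed. The sketch below evaluates them for assumed (hypothetical) values of W, r, S, F and S₁, mirroring the kind of inputs fed to the Engineering Equation Solver runs described above; none of these numbers come from the study's experiments.

```python
import numpy as np

# Assumed inputs for gamma = W/A (eq. 7) and sigma = F/S1 (eq. 8)
W = 2.0e-6               # work done, J (hypothetical)
r, S = 1.0e-4, 5.0e-3    # channel radius and length, m (hypothetical)
A = 2 * np.pi * r * S    # membrane channel surface area, A = 2*pi*r*S
gamma = W / A            # surface energy per unit area, J/m^2

F, S1 = 3.5e-4, 5.0e-3   # force and external surface length, N and m (hypothetical)
sigma = F / S1           # surface tension, N/m

print(f"A = {A:.3e} m^2")
print(f"surface energy  = {gamma:.3e} J/m^2")
print(f"surface tension = {sigma:.3e} N/m")
```

Sweeping W, F or the geometry in such a script is a quick way to reproduce the qualitative trends reported in Figs. 11 and 12, where the surface energy rises and then falls as the inter-separation distance changes across coating rounds.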
The results revealed an increase and then a decrease in the membrane inter-separation distances during nanoparticle coating, as shown in Fig. 11(a). It was also revealed from Fig. 11(b-c) that there was a critical nanoparticle size during coating, below which continuous coating of the membrane surface with nanoparticles led to a decrease in the inter-separation distances and the surface energy, which impacted the membrane wettability or flow rate, as shown in Fig. 11(e), leading to a hydrophilic surface. The results revealed in this study showed a smooth transition from higher surface energy, due to proper membrane inter-separation distances, which lower the surface energy. There was also a maximum energy and a maximum inter-separation distance during membrane coating, beyond which the surface energy started decreasing, leading to an enhancement of wettability or flow rate, but with poor membrane separability, since both oil and water flowed through the membrane surface, leading to ineffective wettability. The reason for the increasing-to-decreasing surface energy and inter-separation distances with decreasing nanoparticle sizes was explained to be the nanoparticle scattering, which changes the aperture roughness and smoothness. It could be proposed from this study that the optimal membrane separability during wettability, where the oil mixture ratio was low, was actually when the surface energy was highest, although the flow rate was low, because the surface was highly hydrophobic at that point due to poor nanoparticle scattering on the membrane surface with higher membrane inter-separation distances.

During the nanoparticle coating process, when nanoparticles were coated on the membrane surfaces, the particles scattered over the membrane surfaces, creating surface roughness. The rough membrane surfaces created higher surface energy, and their inter-separation distances were higher, with a negative impact on the membrane wettability or flow rate. As more coating took place, specifically the 2nd and 3rd coatings, the roughness of the surfaces decreased due to decreased inter-separation distances, thereby creating smoother membrane surfaces, and the membrane surface energy started decreasing, leading to an improvement in wettability or flow rate. Therefore, the results in this study indicate that the transition from roughness to smoothness, due to the change in the membrane inter-separation distances, was a consequence of continuous coating from the 1st to the 3rd coating. Several researchers have described membrane roughness and smoothness by their contact angles (1-14) and their impacts on membrane wettability. Few researchers have looked at the membrane inter-separation distances and their effect on wettability, which is the focus of this research. Smooth membrane surfaces are due to a proper membrane coating process in which the scattering of nanoparticles is acceptable. The nanoparticle scattering is then uniformly distributed across the membrane surface, thereby creating smooth surfaces, which lower the membrane surface energy and improve the membrane wettability. Therefore, the random nature of the nanoparticle scattering impacted the surface-energy-driven separability of oil/water, as revealed in Fig. 11(a-f). It is also revealed from Fig. 11(b) that decreases in the aperture size and inter-separation distances on the membrane surfaces increase the energy, which negatively impacted the membrane wettability.
Fig. 11(b) further revealed that a continuous decrease in membrane aperture size and nanoparticle scattering on the membrane surface did not always lead to an increase in surface energy during wettability: there existed a critical nanoparticle scattering on the membrane surface beyond which the membrane surface energy decreased, increasing membrane wettability (flow rate). This also revealed the effect of the roughness or smoothness of the nanoparticle scattering on wettability during coating. Rough scattering of nanoparticles on the aperture led to higher surface energy, which lowered membrane wettability, while smooth scattering of nanoparticles on the aperture led to lower surface energy, which increased membrane wettability, as shown in Fig. 11(d). Fig. 11(a) also revealed that a change in membrane pressure (force) during nanoparticle coating leads to an increase in membrane surface energy, since rougher surfaces were created, which negatively affects membrane wettability and the purity of the separated oil and water. The impacts of the membrane force during coating on wettability (flow rate) are shown in Fig. 12. Figure 12(a) revealed the relationship between the membrane force during coating, the nanoparticle sizes, the aperture sizes, the surface energy and the surface tension during wettability. Fig. 12(a–b) shows that decreases in nanoparticle sizes and aperture sizes lead to an increase in force during coating. The results in Fig. 12(c) revealed that a decrease in force during coating leads to an increase in membrane surface energy up to an optimum, after which the energy decreases during the coating process. The results in Fig. 12(b) revealed that an increase in force during membrane coating leads to an increase in surface tension. The current research findings revealed results on nanoparticle scattering during membrane coating that other researchers have not reported. Optimal membrane inter-separation distances and aperture sizes during nanoparticle coating have been revealed. The derived theoretical model results for nanoparticle scattering were validated experimentally by a wettability test. The real wettability experiments were needed to validate the theoretically obtained results, together with the SEM, EDS and statistical analyses in the current study.

Figure 13. (a–b) Membrane installation in progress; (c) installation of the proposed nanostructured membrane; (d) experimental setup of the nanostructured membrane technology used in oil/water separation; (e) water separated from contaminated oil/water after experimentation.

Experimentation of oil/water separation to validate the newly designed nanostructured membrane

A centrifugal pump was used to pump the oil/water mixture from the storage tank through the nanostructured membrane to the final point, where only pure water was collected, as shown in Fig. 13(e). The gauge pressure was 180 kPa during oil/water separation, and the flow rate of water during pump operation was 10 litres/s. The experiment was performed at room temperature and at a steady pressure supplied by the centrifugal pump. Glass 1st LP, 2nd LP, 3rd LP, 1st HP, 2nd HP and 3rd HP were tested for validation of the wettability surface. The different samples obtained were sent for chemical analysis at the AMBIO laboratory to detect the level of oil impurities not filtered out by the membrane technology.
This was also done to adhere to national engineering practice and the regulations given by the South African Engineering Council for pollution control and environmental safety.

Oil and Grease Analysis Using US EPA Method 1664B with the SPE-DEX 1000 Oil and Grease System

The extraction of oil (petrol) from the purified water after the experiments was done using US EPA Method 1664B with the SPE-DEX 1000 oil system, following four main steps. Step 1, Pre-wet/conditioning solvent: the disc is pre-wetted with a solvent to make it ready for the sample. Step 2, Processing of the sample: the sample from its original sample bottle is processed through the SPE disk. Step 3, Solvent rinse of the sample bottle: a solvent is used to rinse the bottle. Step 4, Extraction of the SPE disk: the disk is extracted and analysed in a small volume of solvent rather than in the large volume of water where the analytes started. The pan is removed from the heat source (compressor). Any oil or grease is evidenced by a white spot on the pan once the pure water has evaporated during the heating process. The pan was then weighed and the initial empty-pan value subtracted from the full-pan value to obtain the gravimetric value. The results obtained for the glass LP and HP rounds of coating are shown in Fig. 14.

Figure 14. Wettability test for oil/water separation on glass LP and HP rounds.

Tables 1 and 2 of Fig. 14 present the final laboratory results after the oil and grease analysis was performed. The determinant was the oil in the contaminated water and the oil filtered through the designed nanostructured membrane. The wettability of six membranes, HP and LP, was tested for validation purposes. Values greater than 1 revealed that oil was not detected in the filtered water; values less than 1 indicated that oil was detected in the filtered water after the wettability test. These results can therefore be correlated with the different HP and LP coating rounds. As shown in Fig. 14, glass HP 3 showed the most efficient wettability of all the glass coating rounds. It was important to compare the SEM images for validation of their surface properties. The 3rd HP coating round had few or no clusters on its surface compared with the other coating rounds; the presence of clusters is therefore an indication of poor wettability. The glass 2nd-round LP coating revealed a membrane surface with poor wettability, since a few oil molecules were observed in the separated water. The reason for the poor oil/water separation observed for the 2nd HP, 2nd LP and 3rd LP coatings is their high surface roughness and the larger number of clusters observed. More clusters were observed on these membrane surfaces, and clusters are reported to increase surface roughness. These surfaces are also reported to have large nanoparticle inter-separation distances. The orientation, morphology, spatial distribution and size of the nanoparticles are affected by clusters.

CONCLUSION AND RECOMMENDATION

The current study was aimed at studying the stochastic effect of nanoparticle inter-separation distances and their impact on wettability during oil/water separation. To achieve this objective, a new membrane surface was created, and the inter-separation distances were theoretically modelled and validated experimentally.
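A minimal sketch of the gravimetric step described above, using hypothetical pan weights; the pass/fail threshold of 1 follows the reporting convention stated for Tables 1 and 2 of Fig. 14, and all sample values are illustrative, not laboratory data from this study.

```python
def gravimetric_residue(full_pan_g: float, empty_pan_g: float) -> float:
    """EPA 1664B gravimetric step described above: subtract the initial
    empty-pan weight from the full-pan weight to get the residue mass."""
    return full_pan_g - empty_pan_g

def oil_detected(reported_value: float) -> bool:
    """Reporting convention stated for Tables 1 and 2 of Fig. 14:
    values > 1 -> oil NOT detected; values < 1 -> oil detected."""
    return reported_value < 1.0

# Hypothetical pan weights (grams), illustrative only.
pans_g = {"3rd HP": (52.4302, 52.4301), "2nd LP": (52.4450, 52.4401)}
for name, (full, empty) in pans_g.items():
    print(f"{name}: residue = {gravimetric_residue(full, empty) * 1000:.2f} mg")

# Hypothetical laboratory index values, interpreted with the stated convention.
reported = {"3rd HP": 1.8, "2nd LP": 0.6}
for name, val in reported.items():
    print(f"{name}: oil detected = {oil_detected(val)}")
```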
It was revealed that there is an optimal nanoparticle inter-separation distance, which gave optimal membrane wettability after the 3rd HP round of coating. Different nanoparticle orientations, sizes, shapes, spatial distributions and morphologies were also observed to impact membrane wettability. Clusters were observed on the membranes, with more clusters in the LP coatings than in the HP coatings; these clusters greatly impacted membrane wettability negatively. Good correlation was observed between the SEM images, the EDS results and the descriptive statistics of the amounts of elements in the surface layer formed after the different coating rounds. A theoretical simulation was needed to properly correlate the experimentally observed variables with the theoretically observed variables. It was shown that the decrease in nanoparticle sizes during nanoparticle scattering, which affects wettability, leads to an increase in membrane surface energy, i.e. increased hydrophobicity during membrane coating. It was also revealed that the continuous decrease of the nanoparticle sizes on the membrane surface during coating did not lead to a further increase in surface energy, since the membrane surface became smoother (less rough) owing to the varying nanoparticle scattering on the membrane surface, which impacted wettability. The theoretical results showed a smooth transition from higher surface energy, as proper membrane inter-separation distances lowered the surface energy. Rough membrane surfaces were found to create higher surface energy, and their inter-separation distances were larger, with a negative impact on membrane wettability (flow rate). The theoretical results therefore revealed that the change from roughness to smoothness, due to the change in membrane inter-separation distances, was a consequence of continuous coating from the 1st to the 3rd coating. Although good wettability was observed for the smoother membrane surfaces, the presence of clusters during LP and HP coating is a serious problem that needs further investigation.
Implications of near-term mitigation on China's long-term energy transitions for aligning with the Paris goals

In the international community, there are many appeals to ratchet up the current nationally determined contributions (NDCs), in order to narrow the 2030 global emissions gap with the Paris goals. Near-term mitigation has a direct impact on the efforts required beyond 2030 to successfully control warming within 2°C or 1.5°C. In this study, the implications of near-term mitigation on China's long-term energy transitions until 2100 for aligning with the Paris goals are quantified using a refined Global Change Assessment Model (GCAM) with six mitigation scenarios. The results show that intensifying near-term mitigation will alleviate China's transitional challenges during 2030–2050 and its long-term reliance on carbon dioxide removal (CDR) technologies. Each five-year earlier peaking of CO2 allows an almost five-year later carbon neutrality of China's energy system. To align with 2°C (1.5°C), peaking in 2025 instead of 2030 reduces the requirement of CDR over the century by 17% (13%). Intensifying near-term mitigation also tends to have economic benefits for China's Paris-aligned energy transitions. Under 2°C (1.5°C), peaking in 2025 instead of 2030, with near-term mitigation costs larger by a factor of 1.3 (1.6), has the potential to reduce China's aggregate mitigation costs throughout the century by 4% (6%). Although the way in which China's NDC is to be updated is determined by decision-makers, these transitional and economic benefits suggest that China should try its best to pursue more ambitious near-term mitigation in accordance with its latest national circumstances and development needs.

Introduction

In the Paris Agreement, the Parties to the United Nations Framework Convention on Climate Change (UNFCCC) collectively decided to "hold the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels" (UNFCCC, 2015). Responding to the achievement of such global goals, 184 Parties have submitted nationally determined contributions (NDCs) to the UNFCCC secretariat during the last several years, in which their concrete near-term mitigation objectives until 2030 are determined. Previous studies (e.g., Fawcett et al., 2015; Robiou du Pont et al., 2017; Rogelj et al., 2016; UNEP, 2019) have assessed these national near-term objectives and declared that the global aggregate of the NDCs in 2030 falls short of the cost-effective emissions levels consistent with well-below 2°C simulated by integrated assessment models (IAMs). Countries are therefore called upon, by both the international climate change negotiations and the literature, to ratchet up the current NDCs, in order to narrow and possibly close the 2030 global emissions gap. Besides the NDCs, the realization of the Paris goals by 2100, which is associated with very limited carbon budgets over the century (Clarke et al., 2014; Rogelj et al., 2018), also relies on post-NDC, long-term mitigation (Rose et al., 2017). As the successor of the NDCs, the degree of long-term mitigation required will depend closely on how much mitigation has been achieved by 2030. At the global scope, several studies (e.g., Holz et al., 2018; Strefler et al., 2018) have recently discussed the potential influences of implementing more ambitious assumed near-term mitigation on the long-term transitions needed to keep the Paris goals within reach.
They concluded that intensifying mitigation before 2030 could help reduce long-term challenges and risks. To stay below 1.5°C, if global CO2 emissions are further reduced by 30% from the NDC levels in 2030, the global requirement for carbon dioxide removal (CDR) technologies (often represented by bioenergy with carbon capture and storage (BECCS) in most models) in the second half of the century could be halved (Strefler et al., 2018). China is currently the biggest CO2 emitter and energy consumer in the world. China's NDC, submitted in 2015, pledged a package of mitigation objectives and actions toward 2030. The main elements were determined as "to achieve the peaking of carbon dioxide emissions around 2030 and making best efforts to peak early; to lower carbon dioxide emissions per unit of GDP by 60% to 65% from the 2005 level; to increase the share of non-fossil fuels in primary energy consumption to around 20%" (NDRC, 2015). For China, 90% of emissions (excluding land use, land-use change and forestry (LULUCF)) came from the energy system (UNFCCC, 2019). Energy system transition will be the primary means by which China reduces its emissions and contributes to the overarching global goals. Recent studies (e.g., Duan et al., 2018; Gallagher et al., 2019; Jiang et al., 2018, 2018a; Lugovoy et al., 2018; Mi et al., 2017; Wang and Chen, 2018; Zhou et al., 2019) have concentrated on assessing China's NDC CO2 trajectories until 2030 or its 2°C-aligned energy system changes until 2050 using bottom-up modeling. They showed that China, with additional efforts, could be able to overachieve its NDC targets, including achieving an earlier CO2 peaking. Well-below 2°C or 1.5°C is a goal for 2100. To align with the Paris goals, different near-term mitigation will have different implications for China's long-term energy transitions until 2100, and intensifying near-term mitigation will intuitively lower China's transitional challenges beyond 2030. With the global 'stocktake' adopted in the Paris Agreement (stated as "undertake its first global stocktake in 2023 and every five years thereafter"), national near-term mitigation objectives are being reconsidered and might be updated in the coming years. The next-step choices China makes will have a long-lasting influence on the world's possibility of holding warming within 2°C or 1.5°C by the end of the century. Although how China will ratchet up its NDC is ultimately determined by decision-makers, informing the degree to which China's near-term mitigation will impact its long-term transitional and economic challenges is useful to support these decisions. Most existing studies have only focused on investigating the achievement of the NDC itself. An important literature gap exists in assessing near-term mitigation implications for China's long-term transitions, especially under 1.5°C. To fill this gap, this study applies an IAM, a refined Global Change Assessment Model (GCAM), which is a participant in the China Energy Modeling Forum (CEMF) for this Special Issue "The Economic Feasibility and Trade-offs in Achieving China's Low-Carbon Transformation", to derive key implications of China's near-term mitigation before 2030 for its long-term energy transitions until 2100, in aligning with well-below 2°C and 1.5°C. By doing so, we hope to identify and provide quantitative information to support China's NDC reconsideration and low-carbon transformation.
The study could also provide a reference for other developing countries when considering the update of their NDCs and the alignment between near-term mitigation and long-term transitions. The remainder of this paper is organized as follows. Section 2 describes the methods and scenarios. Section 3 presents the results of China's energy transitions under different near-term mitigation scenarios. Section 4 provides the main conclusions.

Modeling framework

To implement the analysis, the paper applies GCAM (version 4.0) as the modeling framework. As an open-source model (http://www.globalchange.umd.edu/gcam/), GCAM was originally developed by the Joint Global Change Research Institute and has been widely invoked in international energy and climate assessments, including the Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC). GCAM simulates the dynamics of the global energy-economy-land-climate system until 2100 at a 5-year time step. It disaggregates the world into 32 geopolitical regions (China is an individual region) which are linked through international trade. GCAM is a bottom-up model with the ability to comprehensively simulate regional energy systems (covering energy supply, conversion, distribution and demand) and to incorporate rich technologies and fuels (including CCS and BECCS). The model features a partially dynamic-recursive equilibrium where the demand and supply of primary energy, agricultural and forest commodities are balanced (market-clearing) globally at every simulated time period. The model also to a large degree features cost-effectiveness, where technologies, fuels, feedstocks and modes compete for market shares through a logit function based on costs and preferences. The logit represents a discrete economic choice which avoids 'winner-take-all' (this differs from traditional choices which strictly minimize costs), with one typical implementation as Eq. (1):

$$s_i = \frac{b_i \, p_i^{\,r_i}}{\sum_j b_j \, p_j^{\,r_j}} \quad (1)$$

where s_i is the market share of candidate i whose service costs are p_i (comprising energy costs and levelized non-energy costs), b_i is the associated share-weight, which is used to calibrate historical data and represent future preferences, and r_i is the associated logit exponent, which is also called an elasticity. More information on the GCAM structure, methodologies, technology assumptions and data sources has been presented by the GCAM team (e.g., McJeon et al., 2014; Muratori et al., 2017; Shukla and Chaturvedi, 2012; Yu et al., 2019) and in the GCAM documentation (http://jgcri.github.io/gcam-doc/). Energy service demands on the demand side are a primary driver of the energy system development. GCAM describes the demand side with three aggregate end-use sectors: industry, building and transportation. On top of the standard GCAM, the three end-use sectors of China have been refined as follows (a numerical sketch of the logit-sharing pattern is given at the end of this subsection):

• China's industrial structure is complicated. This sector is disaggregated and calibrated from an aggregate sector in the standard GCAM into eleven specific subsectors (iron-steel, aluminum-nonferrous metals, other nonmetallic minerals, paper pulp and wood, chemicals, food processing, other manufacturing, agriculture, construction, mining, and cement) with six intermediate energy services (boiler, process heat, machine drive, electrochemical process, other energy services, and feedstock) (Wang et al., 2016; Zhou et al., 2013).
• Regarding the different climate conditions and the differences between urban and rural areas, China's building sector is represented and calibrated using a combination of climate zones (severe cold, cold, hot summer-cold winter, and hot summer-warm winter) and districts (urban, rural and commercial) with five types of energy services (cooling, heating, lighting, water heating and cooking, and electric equipment).
• According to transport areas, characteristics and purposes, China's transportation sector is downscaled and calibrated into five passenger subsectors (intercity transport, urban transport, rural transport and international aviation for private passengers; and business transport for government bodies and companies) and four freight subsectors (general domestic freight, rural three/four-wheeler freight, international ship freight, and international aviation freight) with specific types of modal services (Pan et al., 2018; Yin et al., 2015).

Energy service demands in the three disaggregated end-use sectors are satisfied by multiple competing technologies and fuels through the logit-sharing pattern. In addition, a cascade of technologies is represented for power generation, which is the primary energy conversion sector of China, including 10 types of fossil fuel technologies (coal, coal-CCS, integrated gasification combined cycle (IGCC), IGCC-CCS, gas, gas-CC, gas-CC-CCS, liquid, liquid-CC, and liquid-CC-CCS) and 15 types of non-fossil fuel technologies (nuclear-II, nuclear-III, hydro, wind, wind-storage, photovoltaic (PV), PV-storage, PV-rooftop, concentrated solar power (CSP), CSP-storage, biomass, biomass-CCS, biomass-CC, biomass-CC-CCS, and geothermal).

Key assumptions

Energy service demands relate closely to socioeconomic developments (how energy service demands are estimated is given in Appendix A). In this study, China's socioeconomic assumptions until 2100 are presented in Table 1; they take the latest socioeconomic trends, such as the 'new normal' economy and the 'two-child' policy, into account. With these assumptions, China's GDP increases by a factor of more than 3, 6 and 19 in 2030, 2050 and 2100, respectively, from the 2010 level; China's population peaks in 2030 at 1.43 billion, then decreases to 1.33 billion in 2050 and further to 1.08 billion in 2100; China's urbanization rate increases to 69.9%, 77.6% and 86.1% in 2030, 2050 and 2100, respectively. Based on Wang et al. (2016) and Pan et al. (2017), the assumptions of other key parameters for China, such as demand-price elasticities and techno-economic parameters, are also updated to reflect recent changes and policies in China. Details of these parameters can be found in our recent papers (e.g., Pan et al., 2018, 2018a; Chen et al., 2019) (for the other 31 regions of the world, we maintain the default parameter assumptions of the standard GCAM). In our model, negative emissions of the energy system are achieved via BECCS, which is the representation of CDR. CCS is assumed to start entering the industry sector (synthetic fuel production and industrial production such as cement, steel and iron) and power sector from 2020 and 2025 onward, respectively. Note that, owing to the characteristics of GCAM, all these parameters and assumptions are exogenous and remain unchanged during modeling. Overall, our refinements and calibrations of GCAM for China are characterized by a more detailed sectoral disaggregation, more types of technologies and services, and a more localized parameterization. By doing so, we attempt to better reflect what is happening in China and to better illustrate the energy services and transitional options of China.
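As a numerical illustration of the logit-sharing pattern in Eq. (1), the following sketch computes market shares for three competing technologies; the share-weights, costs and exponent are hypothetical values chosen for illustration, not calibrated GCAM parameters.

```python
def logit_shares(costs, share_weights, exponent):
    """Discrete logit choice as in Eq. (1): s_i = b_i * p_i**r / sum_j b_j * p_j**r.
    A negative exponent lets cheaper options win larger (but not all) market share."""
    weights = [b * (p ** exponent) for b, p in zip(share_weights, costs)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical service costs ($/GJ), share-weights and logit exponent.
costs = [8.0, 10.0, 14.0]          # e.g., three competing technologies
share_weights = [1.0, 1.2, 0.8]    # calibration / preference factors
r = -3.0                           # logit exponent (elasticity)

shares = logit_shares(costs, share_weights, r)
for name, s in zip(["tech A", "tech B", "tech C"], shares):
    print(f"{name}: {s:.1%}")
# Unlike strict cost minimization, all technologies retain a share ('no winner-take-all').
```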
Mitigation scenario settings

At the global scope, a range of carbon budgets aligning with the Paris goals has been reported by IAM emissions scenarios (e.g., Clarke et al., 2009; Blanford et al., 2014; van Vuuren et al., 2016). These scenarios have been summarized in the Fifth Assessment Report (Clarke et al., 2014) and the Special Report on Global Warming of 1.5°C (Rogelj et al., 2018) of the IPCC. Among these scenarios, as a numerical illustration, this paper uses the Representative Concentration Pathway 2.6 (van Vuuren et al., 2011) and the average pathway of the 1.5°C scenarios in Rogelj et al. (2015) as representatives of well-below 2°C (>66% chance) and 1.5°C (>50% chance), respectively. Their global carbon budgets for the energy system (fossil and industrial CO2; LULUCF CO2 and non-CO2 are excluded in this study) during 2011–2100 are approximately 1000 and 500 GtCO2, respectively. Under the two selected global budgets, an allocation based on the 'carbon budget account' (BASIC experts, 2011), as indicated in Eq. (2),

$$b_i = \frac{pop_i}{\sum_j pop_j}\Big(B + \sum_j his_j\Big) - his_i \quad (2)$$

(where B indicates the global carbon budget during 2011–2100, and b_i, pop_i and his_i indicate the carbon budget during 2011–2100, the population in the year 2010 and the historical cumulative CO2 emissions during 1850–2010, respectively, of country i), implies that China needs to control its 2011–2100 emissions within at least 320 GtCO2 to stay well below 2°C and within 215 GtCO2 to stay below 1.5°C. Following the implementation of the NDC, China's CO2 trajectories before 2030 are assumed to follow the projections in Pan et al. (2017), updated with the latest inventory data (UNFCCC, 2019) in this study. With this assumption, China's energy system emissions are projected to be 10.1, 10.4 and 10.6 GtCO2 by 2020, 2025 and 2030, respectively. This implies an almost plateauing of China's energy system CO2 emissions during 2020–2030 and is consistent with the trends projected in the first study program of the CEMF (Lugovoy et al., 2018). The study uses the peaking year of CO2 to represent China's near-term mitigation. Besides 2030, this study further assumes peakings in 2025 and 2020 with rapid mitigation thereafter. Therefore, in the following analysis, six mitigation scenarios, three for 2°C (2C2030, 2C2025 and 2C2020) and three for 1.5°C (1.5C2030, 1.5C2025 and 1.5C2020), are developed, simulated and assessed. Intuitively, an earlier peaking indicates more ambitious near-term mitigation. It is important to note that an earlier peaking, such as 2025 or 2020, is our theoretical assumption and does not represent any attitude, commitment or communication of the government of China (in other words, we do not imply that China must ratchet up its NDC in this manner), but reflects some recent analyses of China's CO2 emissions aligned with the Paris goals (e.g., Gallagher et al., 2019; Jiang et al., 2018, 2018a; Wang and Chen, 2018). Meeting the Paris-aligned carbon budget over the century requires countries to rapidly decline their CO2 trajectories after reaching the peak (Raupach et al., 2014). At present, however, China's post-NDC, long-term mitigation objectives are not yet determined. In this case, in order to align near-term mitigation with long-term goals until 2100, this paper follows Pan et al. (2018), which upgraded the capped-emissions trajectory model proposed in Raupach et al.
(2014) to disaggregate China's 2011–2100 carbon budget (320 GtCO2 for 2°C and 215 GtCO2 for 1.5°C in this study) into annual exemplary CO2 trajectories that China could follow, as indicated by Eq. (3), where y_t denotes China's annual CO2 emissions in year t after the peaking, y_peak denotes the CO2 peaking level, and m is a mitigation parameter calibrated by matching China's cumulative emissions with its carbon budget. In particular, n_max is introduced to represent the sustained annual net-negative CO2 emissions level (Rogelj et al., 2019) that China could achieve in the future through deploying CDR (BECCS in our model) in the energy system. In this study, the value of n_max is assumed to be −20% of China's CO2 emissions in 2010. This number is approximately equivalent to the average global CO2 emissions in 2100 (as a fraction of the 2010 levels) of our two selected global scenarios. Compared with the negative emissions levels of the existing scenarios in the literature (Clarke et al., 2014; Rogelj et al., 2018), an n_max level of −20% is moderate and avoids too extreme a reliance on CDR. In the simulations, the CO2 trajectories designed from this trajectory model are coupled into our refined GCAM by treating them as China's carbon caps for the energy system. GCAM searches recursively for the price vector such that these carbon caps are met. To control inter-sectoral carbon leakage, the modeled energy system carbon prices are also imposed on LULUCF CO2; and to help control inter-regional carbon leakage, the other 31 regions of the world are assumed to jointly meet the remaining global carbon budget. Note that, under these carbon caps, China still participates in international trade of primary energy and other commodities in the simulations, but has to meet its CO2 trajectories unilaterally, without emissions allowance trade across regions.

CO2 trajectories

China's energy system CO2 trajectories toward the end of the century, subject to the assumptions above, are presented and compared across the six mitigation scenarios in Fig. 1. Note that applying these scenarios as illustrative examples aims to identify key implications of near-term mitigation for China's long-term energy transitions, not to determine which near-term mitigation or CO2 peaking year is optimal for China. The identified implications are expected to serve as part of the information used by decision-makers to formulate the most suitable transitional trajectories for China. Logically, the 1.5°C scenarios require China to achieve more aggressive mitigation than the 2°C scenarios. Under 2°C (1.5°C), China's 2030 emissions decrease to 9.6 (8.9) and 8.6 (7.4) GtCO2 when the peaking is achieved earlier, in 2025 and 2020, respectively. These emissions are approximately 10% (15%) and 20% (30%) lower than the current NDC levels, respectively. As expected, more ambitious mitigation by 2030 alleviates the medium-to-long-term mitigation pace China needs to align with its carbon budget. China's average mitigation rate during 2030–2050, as a fraction of 2010 emissions, decreases from 4.2%/yr in 2C2030 (6.2%/yr in 1.5C2030) to 3.7%/yr in 2C2025 (4.8%/yr in 1.5C2025) and further to 3.0%/yr in 2C2020 (3.6%/yr in 1.5C2020). Importantly, near-term mitigation has an important implication for the timing of carbon neutrality of China's energy system, which has not been systematically assessed in the prior literature.
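To illustrate the budget-constrained trajectory design described around Eq. (3), the sketch below calibrates a simple declining trajectory to a cumulative budget. It is a minimal stand-in: it assumes an exponential decline from the peak toward the net-negative floor n_max rather than the exact functional form of Pan et al. (2018), and all input numbers are illustrative.

```python
import math

def trajectory(y_peak, m, n_max, years):
    """Annual emissions after the peak: exponential decline toward the
    sustained net-negative floor n_max (assumed form, not the exact Eq. (3))."""
    return [n_max + (y_peak - n_max) * math.exp(-m * t) for t in range(years)]

def calibrate_m(y_peak, n_max, budget, years, lo=1e-4, hi=1.0):
    """Bisect on the mitigation parameter m so cumulative emissions match the budget."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sum(trajectory(y_peak, mid, n_max, years)) > budget:
            lo = mid          # emitting too much -> mitigate faster
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative inputs: 2030 peak at 10.6 GtCO2/yr, floor at -20% of ~8.1 GtCO2/yr
# (approximate 2010 level), post-peak budget assumed ~250 GtCO2 (hypothetical).
y_peak, n_max, years = 10.6, -0.2 * 8.1, 71
m = calibrate_m(y_peak, n_max, budget=250.0, years=years)
path = trajectory(y_peak, m, n_max, years)
neutrality = next((2030 + t for t, y in enumerate(path) if y <= 0.0), None)
print(f"calibrated m = {m:.3f}/yr, carbon neutrality around {neutrality}")
```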
According to the trajectory model, the current NDC requires achieving carbon neutrality of China's energy system in 2065 under 2°C and in 2050 under 1.5°C. Our scenarios show that each five-year earlier peaking of CO2 allows an almost five-year later carbon neutrality.

Energy system transitions

Given a long-term climate goal, our simulations, which seek largely cost-effective options to match the CO2 mitigation trajectories in Fig. 1, indicate that near-term mitigation tends to have comparatively small impacts on China's total energy consumption and power generation (Table 2). Across our three 2°C scenarios, in 2050, China's primary and final energy consumption are estimated to be about 170 and 98 EJ, respectively, and power generation grows swiftly to reach nearly 16 PWh. Compared with 2°C, the 1.5°C scenarios further reduce China's energy service demands (induced by higher carbon prices) and accelerate end-use electrification. However, the changes in China's total energy consumption and electricity production across our three 1.5°C scenarios are also small. Owing to its resource endowment, and although actions have been taken in recent years, China's energy system is still carbon-intensive at present: coal still accounted for 59% of China's primary energy in 2018, while non-fossil energy accounted for only 14%. To align with the Paris goals, all our scenarios feature a predominant role of non-fossil energy in China's energy transitions and require substantially challenging objectives to be achieved in the long term, as presented in Fig. 2 (note that non-fossil energy is accounted for using coal-equivalent calculation). By 2050, the non-fossil share of China's primary energy consumption must exceed 55% (67%) to align with 2°C (1.5°C), and the coal share must decrease dramatically to less than 15% (10%); the electricity supply system requires almost complete decarbonization in all scenarios; and the share of electricity in China's final energy consumption, with increasing electrification in the building, industry and transportation sectors, must reach 51% (64%) in the 2°C (1.5°C) scenarios. In the second half of the century, with technological progress and more stringent mitigation, fossil options are continuously substituted by non-fossil alternatives. By 2100, all three 2°C (1.5°C) scenarios show that non-fossil energy and electricity provide 90% (95%) and 75% (80%) of China's primary and final energy consumption, respectively. Our simulation of the current NDC indicates that non-fossil energy contributes 21% of China's 2030 primary energy supply, which meets the NDC target. The associated low-carbon electricity share (here including renewable, nuclear and fossil-CCS generation) and end-use electrification rate are projected to be 46% and 28%, respectively, in 2030. Compared with the current NDC, intensifying mitigation before 2030 will to some degree temper the challenge for China of achieving these transitional objectives in the post-NDC period, especially between 2030 and 2050. For instance, different peaking years of CO2, while barely affecting total energy consumption, have implications for the pace at which China needs to develop non-fossil energy to substitute fossil energy. To follow the 2°C (1.5°C) mitigation trajectories during 2030–2050, the current NDC requires an average increase of the non-fossil share in China's primary energy supply of 1.8%/yr (2.6%/yr), which could be lowered by 0.2%/yr (0.4%/yr) with each five-year earlier CO2 peaking.
Enabling more ambitious 2030 mitigation in China than the NDC depends primarily on a scaling-up of non-fossil energy to replace coal in the coming decade. In 2030, the share of non-fossil energy increases to 24% in 2C2025 (26% in 1.5C2025) and 28% in 2C2020 (30% in 1.5C2020) (30% is a very optimistic estimate of China's 2030 non-fossil share by the Climate Action Tracker (CAT, 2019)). In contrast, the share of coal in 2030 falls from 47% under the NDC to 43% in 2C2025 (40% in 1.5C2025) and further to 39% in 2C2020 (33% in 1.5C2020). The scaling-up of non-fossil energy will go in parallel with a rapid ramping-up of low-carbon electricity production and end-use electrification. For instance, the CO2 peaking in 2C2025 (1.5C2025) implies low-carbon generation promoted to 51% (56%) of China's electricity supply in 2030. Encouragingly, facilitating factors have emerged in China in recent years: the costs of solar and wind energy have declined rapidly, and the government is issuing useful plans and measures such as the industrial structure upgrade, the national emissions trading scheme, the renewable portfolio standard, and more stringent building codes and transport emissions standards. In 2017, China's renewable energy investments reached 126.6 billion dollars and accounted for 45% of global renewable energy investments (Frankfurt School-UNEP Centre/BNEF, 2018). However, making a high penetration of non-fossil energy viable in China in the near term still poses critical challenges for some fundamental supporting factors, such as large-scale energy storage, an advanced power transmission and distribution network, smart grids, and the phasing-out of existing conventional coal power plants. Beyond current policies, these factors call for significantly more measures, projects and investments in China immediately.

Dependence on carbon removal

At the global scope, a range of papers (e.g., Clarke et al., 2014; Pan et al., 2018a; Rogelj et al., 2018) have highlighted that realizing Paris-aligned energy transitions relies on applying CCS and CDR to remove CO2 emissions from the energy system. Besides a strong scaling-up of non-fossil energy, all our scenarios also feature an extensive and unavoidable requirement of CCS and CDR for China to realize its long-term energy transitions, as presented in Fig. 3 (note that the precise numbers here relate to modeling assumptions, including the use of a specific sustained annual net-negative CO2 level). Although an earlier peaking of CO2 slightly increases (decreases) the deployment of CCS in the first (second) half of the century, the total emissions stored by CCS are projected to be around 180 (195) GtCO2 across all three 2°C (1.5°C) scenarios over the century, equivalent to 21 (23) years of China's 2010 CO2 levels. Given infrastructure life spans, the existing coal power plants should be quickly equipped with CCS. In our scenarios, all conventional coal power plants without CCS are required to be fully phased out by 2055 to stay well below 2°C and by 2045 to stay below 1.5°C. According to Fig. 3b, BECCS enters China's energy system (largely in refining biofuel and producing bio-power) mainly after 2050, when its costs become economically competitive (owing to the high carbon prices associated with reaching very low and even negative emissions). With the completion of the NDC, 61 (91) GtCO2 are estimated to be offset from China's energy system via BECCS over the century to align with 2°C (1.5°C).
By assessing the potentials of saline aquifers, oil and gas reservoirs and coal seams, China's optimistic storage capacity is estimated to be possibly over 1500 GtCO2 (Höller and Viebahn, 2016; Sun et al., 2018). Although this optimistic capacity can sustain the CCS requirement here, a wide application of BECCS will in practice be afflicted with risks. Besides technological and economic issues, it also raises public concerns about food security, biodiversity and other sustainable development goals (van Vuuren et al., 2017). We therefore suggest that the government of China make preparations in advance so that the sociopolitical, technological and economic barriers to CDR development can be gradually removed. To align with the Paris goals, our scenarios highlight that the degree to which China deploys CDR is closely related to the near-term mitigation achieved by 2030. Intensifying near-term mitigation could significantly alleviate the risk of long-term CDR deployment for China. Compared with the NDC, BECCS over the century is lowered by 17% in 2C2025 (13% in 1.5C2025) and even by 32% in 2C2020 (25% in 1.5C2020). Note that China's CO2 emissions in 2050 in the 2°C scenarios of this study (Fig. 1) are at the lower end of the range of emissions in prior 2°C scenarios for China (about 3.0–6.5 GtCO2) (e.g., Chen et al., 2016; Jiang et al., 2018a; Li et al., 2019). Hence, the prior scenarios are likely to require China to deploy more CDR in the post-2050 period than projected here to support the final achievement of the Paris goals.

Mitigation costs

The results above quantified some key transitional benefits of intensifying near-term mitigation (note that this is not equivalent to saying that China's current NDC is not ambitious; discussions of the NDC's fairness and adequacy are beyond the purpose of this paper) for China's Paris-aligned energy transitions beyond 2030. According to Fig. 4a, our scenarios further indicate that intensifying near-term mitigation also tends to have economic benefits for China's transitions throughout the century, especially when decision-makers discount future mitigation costs at no more than 6%/yr under 2°C (7%/yr under 1.5°C). Following the current NDC, China's aggregate mitigation costs between now and 2100, discounted at 5%/yr (used in the IPCC reports to obtain net-present mitigation costs), are projected to be 3.23% of GDP in 2C2030 and 5.60% in 1.5C2030. Intensifying near-term mitigation reduces these costs to 3.10% in 2C2025 (5.28% in 1.5C2025) and 3.05% in 2C2020 (5.07% in 1.5C2020). Note that the mitigation costs assessed here are the direct investment, operation and maintenance costs of mitigation measures, estimated as the area under marginal abatement cost curves; they do not include mitigation benefits or co-benefits. Incorporating (co-)benefits such as improved air quality and public health would further hedge the mitigation costs resulting from early mitigation (Li et al., 2019). Overall, our scenarios highlight that intensifying near-term mitigation is very likely to have not only transitional but also economic benefits for China's long-term energy transitions toward the Paris goals. These benefits suggest that China should try its best to pursue more ambitious near-term mitigation, including an earlier CO2 peaking, in accordance with its latest national circumstances and development needs. However, enabling more ambitious mitigation than the NDC appears costly for this developing country in the coming decade (Fig. 4b).
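As a stylized illustration of the costing approach described above (area under a marginal abatement cost curve, discounted to net-present terms at 5%/yr), the sketch below computes discounted mitigation costs for a toy abatement path; the linear MAC curve and all numbers are hypothetical, not the GCAM cost data of this study.

```python
def annual_mitigation_cost(abatement_gt, mac_slope=20.0):
    """Area under a (hypothetical) linear MAC curve p = mac_slope * q:
    cost = 0.5 * slope * q^2. With q in GtCO2 and slope in $/tCO2 per GtCO2,
    the result is in billion $."""
    return 0.5 * mac_slope * abatement_gt**2

def net_present_cost(abatements, start_year=2020, base_year=2020, rate=0.05):
    """Discount annual costs at `rate`/yr to the base year (IPCC-style 5%/yr)."""
    total = 0.0
    for i, q in enumerate(abatements):
        year = start_year + i
        total += annual_mitigation_cost(q) / (1.0 + rate) ** (year - base_year)
    return total

# Hypothetical abatement paths (GtCO2/yr, 2020-2029): an earlier peak costs more up front.
peak_2030 = [0.1 * i for i in range(10)]   # gentle ramp under the NDC
peak_2025 = [0.2 * i for i in range(10)]   # steeper ramp for an earlier peak

print(f"NPV cost, 2030 peak: {net_present_cost(peak_2030):.1f} billion $ (illustrative)")
print(f"NPV cost, 2025 peak: {net_present_cost(peak_2025):.1f} billion $ (illustrative)")
```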
Implementing the current NDC implies that China's mitigation costs between now and 2030 are equivalent to 0.42% of GDP (discounted at 5%/yr). These costs increase by a factor of 1.3 in 2C2025 (1.6 in 1.5C2025) and even 1.9 in 2C2020 (3.2 in 1.5C2020). In 2030, the carbon prices associated with 2C2025 (1.5C2025) and 2C2020 (1.5C2020) are estimated to be 1.4 (1.8) and 1.9 (2.3) times as high as those with 2C2030 (1.5C2030), respectively. Several general equilibrium exercises in the literature have even estimated that an earlier peaking of CO2 might lead to an over 2% loss of GDP in China before 2030 (Duan et al., 2018; Mi et al., 2017). As a developing country, China will still need substantial investments in the following decade to accelerate economic growth, social development and poverty eradication. To enable more ambitious mitigation by 2030 in China, which would make valuable contributions to narrowing the 2030 global emissions gap, international support (e.g., financial transfers from the Green Climate Fund) and cooperation (e.g., a regional emissions trading scheme, a sustainable development mechanism) are essential and expected immediately. In the framework of the Paris Agreement, we emphasize that the concept of ratcheting up the NDCs should broadly include mitigation, finance, technology and capacity-building support rather than narrowly indicate mitigation.

Conclusions

To keep the door open for aligning with well-below 2°C or 1.5°C, the intergovernmental climate change negotiations are appealing to all governments to ratchet up the current NDCs. Informing the implications of near-term mitigation for long-term energy transitions is useful to support China's reconsideration of the NDC and its preparation for a low-carbon transformation. The existing literature did not assess these implications; this paper filled the gap by using an integrated assessment model with six representative mitigation scenarios. Key indicators of China's energy transitions under the six scenarios are summarized and compared in Table 3 (for the first six indicators, the 2030/2050/2100 values are presented; 'CCS/BECCS' covers the years 2020–2100; the '2020–2030/2020–2100 mitigation costs' correspond to a discount of 5%/yr). The Paris goals require substantially challenging objectives to be achieved in China's long-term transitions. In 2050, to align with 2°C (1.5°C), non-fossil energy accounts for over 55% (67%) of China's primary energy consumption; electricity supply is almost fully decarbonized; and the end-use electrification rate reaches 51% (64%). Intensifying near-term mitigation to some degree attenuates the stringency of realizing these transitional objectives in the period 2030–2050. Reaching an earlier CO2 peaking, with rapid mitigation thereafter, allows a later carbon neutrality and reduces China's reliance on CDR. Importantly, intensifying near-term mitigation also tends to reduce China's aggregate mitigation costs across the century. Therefore, China could consider reconciling mitigation and development by trying its best to pursue more ambitious near-term mitigation based on its national circumstances and development needs. Enhanced mitigation would not only provide long-term transitional and economic benefits to the country itself but also contribute to narrowing the 2030 global emissions gap. In the coming decade, in order to enable more ambitious mitigation, China needs to further boost the penetration of non-fossil energy to replace coal.
Accordingly, China could more aggressively reinforce policies, mechanisms and investments in renewables and their supporting technologies (e.g., large-scale energy storage, advanced power transmission and distribution networks, and smart grids) during the 14th and 15th Five-Year Plan periods. Measures aimed at ramping up low-carbon electricity production and end-use electrification, such as banning the construction of new coal power plants, fostering CCS in existing power plants, promoting the replacement of coal by electricity in the industry sector, improving electric vehicles in the transportation sector, and popularizing green dwellings in the building sector, must also be accelerated. Implementing these policies and measures requires substantial costs in the years 2020–2030. Compared with the NDC, a ten-year earlier peaking of CO2 with rapid mitigation thereafter might double China's mitigation costs in the coming decade under 2°C and even triple them under 1.5°C. For a developing country, besides domestic efforts, international financial support and cooperation are of significant value in enabling more ambitious mitigation in China by 2030 without compromising its sustainable development. In addition, our scenarios highlighted that CCS and CDR would play a crucial role in China's long-term energy transitions. Although they have not received sufficient attention in China until now, CCS and CDR should be included in China's future energy development strategies and require targeted investment and cultivation. Our study has limitations. First, the results are subject to the specific global carbon budgets and the 'carbon budget account' allocation. Using other global carbon budgets and allocations would change China's budgets. However, under the Paris goals, the basic implications of different budgets for China's energy transitions are expected to be similar, because the allocated budgets are all very stringent compared with China's emissions levels (Pan et al., 2017). Second, the results are also subject to specific technology assumptions. Including new promising options, such as direct air CCS and more advanced hydrogen and nuclear technologies, is expected to provide some flexibility to China's long-term transitions. Finally, ratcheting up the NDC is a systematic decision which needs comprehensive supporting information. This study only provided supporting information from the perspective of energy transitions. Assessing how the achievement of near-term mitigation relates to other aspects, such as the economy, society, employment, and the development environment (e.g., the 2019 novel coronavirus), is also important.

Acknowledgement

This work was supported by the National Natural Science Foundation of China (71703167, 71690243, 71874096, 71774171, 71904201) and the Science Foundation of China University of Petroleum, Beijing (2462020YXZZ038).

Appendix A. Estimation of energy service demand in GCAM

In the modeling framework, future end-use energy service demands, as a primary driver of the energy system development, are assumed to be driven by socioeconomic developments over time through income, price and preference. Overall, industrial service demands (D_t) are projected using Eq.
(A-1), where p_t denotes the energy service price, g_t denotes per capita GDP, N_t denotes the total population, k denotes a calibration factor, and α and β denote elasticities. For the transportation sector, passenger transport service demands are also projected using Eq. (A-1), and freight service demands are projected using Eq. (A-2), where G denotes GDP. In the building sector, energy service demands are not projected using elasticities, but by considering the saturation of per capita floorspace and per unit floorspace service (Eom et al., 2012; Shi et al., 2016). Per capita floorspace (s_t) is projected using Eq. (A-3), where s_s denotes the satiated level, s_m denotes the minimum level, and s_i denotes the saturation impedance of floorspace. Per unit floorspace energy service demands (d_t) are largely projected using Eq. (A-4), where d_s denotes the satiated level (for cooling and heating, d_s further considers other factors, mainly including cooling and heating degree days, building shell efficiency, and internal gains (Yu et al., 2014)), and d_i denotes the saturation impedance of the service. More detailed calculations can be found in our previous papers (e.g., Chen et al., 2019; Pan et al., 2018, 2020; Zhou et al., 2013).

Appendix B. Supplementary data

The standard GCAM, the capped-emissions trajectory model and the data supporting the figures can be found online. Other data are available from the corresponding author upon reasonable request. Supplementary data to this article can be found online at https://doi.org/10.1016/j.eneco.2020.104865.
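A minimal sketch of the service-demand projections in Appendix A, assuming a constant-elasticity form for Eq. (A-1) and a saturating form for Eq. (A-3); since the equations are given here only by their variable definitions, the functional details and all parameter values below are illustrative assumptions.

```python
import math

def industrial_demand(k, p, g, N, alpha, beta):
    """Eq. (A-1), assumed constant-elasticity form:
    D_t = k * p_t**alpha * g_t**beta * N_t (alpha < 0 for price, beta > 0 for income)."""
    return k * (p ** alpha) * (g ** beta) * N

def per_capita_floorspace(g, s_s, s_m, s_i):
    """Eq. (A-3), assumed saturating form: floorspace rises with income g from the
    minimum s_m toward the satiated level s_s, at a pace set by the impedance s_i."""
    return s_s - (s_s - s_m) * math.exp(-g / s_i)

# Hypothetical inputs: price index, per capita GDP (thousand $), population (millions).
D = industrial_demand(k=1.0, p=1.1, g=15.0, N=1400.0, alpha=-0.3, beta=0.6)
s = per_capita_floorspace(g=15.0, s_s=60.0, s_m=10.0, s_i=20.0)   # m^2 per capita
print(f"industrial service demand index: {D:.0f}")
print(f"per capita floorspace: {s:.1f} m^2")
```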
Limited complementarity between U1 snRNA and a retroviral 5′ splice site permits its attenuation via RNA secondary structure

Multiple types of regulation are used by cells and viruses to control alternative splicing. In murine leukemia virus, accessibility of the 5′ splice site (ss) is regulated by an upstream region, which can fold into a complex RNA stem–loop structure. The underlying sequence of the structure itself is negligible, since most of it could be functionally replaced by a simple heterologous RNA stem–loop preserving the wild-type splicing pattern. Increasing the RNA duplex formation between U1 snRNA and the 5′ss by a compensatory mutation at position +6 led to enhanced splicing. Interestingly, this mutation affects splicing only in the context of the secondary structure, arguing for a dynamic interplay between structure and primary 5′ss sequence. The reduced 5′ss accessibility could also be counteracted by recruiting a splicing enhancer domain via a modified MS2 phage coat protein to a single binding site at the tip of the simple RNA stem–loop. The mechanism of 5′ss attenuation was revealed using hyperstable U1 snRNA mutants, showing that restricted U1 snRNP access is the cause of retroviral alternative splicing.

INTRODUCTION

Alternative splicing extensively expands the human transcriptome (1) and proteome, giving rise to roughly 100 000 protein isoforms from only 25 000 genes (2). In retroviruses, a single pre-mRNA corresponding to the complete genome also undergoes alternative splicing to express all viral genes (3). The splicing reaction is executed by the spliceosome (4), whose core consists of five small nuclear RNPs (U snRNPs; 5), some of which participate in splice site recognition via RNA:RNA interactions (6,7). The first step towards mRNA splicing is the recognition of the 5′ss by the free, complementary 5′ end of U1 snRNA (8,9). Therefore, the hydrogen bonding pattern between all 11 nt of the 5′ss and U1 snRNA determines the intrinsic strength of the 5′ss and thus contributes to its recognition and frequency of usage, creating a first layer of regulation (10,11). In contrast to yeast, where almost all splice sites match the consensus sequence (12), splice sites in retroviral and mammalian genomes are much more degenerate, and their recognition is frequently assisted by a number of splicing regulatory proteins (13). Accordingly, regions in proximity to splice sites often represent exonic or intronic splicing enhancers (ESE, ISE; 14) or silencers (ESS, ISS; 15,16). These elements modulate the intrinsic strength of splice sites mostly via recruitment of splicing factors like SR proteins or hnRNPs (17). The efficiency of splice sites can also be modulated by RNA structure (18,19): either folding of the structure competes directly with formation of the U1 snRNA:5′ss RNA duplex, or it acts indirectly by masking binding sites of splicing regulatory proteins (20). Finally, transcriptional elongation can also regulate alternative splicing, illustrating the close connection between splicing and transcription (21). Retroviruses represent very valuable model systems for studying alternative splicing (22). While they synthesize only one polycistronic primary transcript, which undergoes alternative splicing for full viral gene expression (Figure 1A; 23), retroviruses also need to tightly control the use of their splice sites to ensure optimal levels of unspliced versus spliced RNAs (3).
The unspliced or genomic RNA is packaged into progeny virus and also serves as a translation template for the structural and enzymatic proteins Gag and Pol, whereas the spliced RNA encodes the envelope protein (Env; Figure 1A). The correct ratio of Gag and Env partly determines viral infectivity. HIV splicing is highly regulated by ESEs or ESSs [(24–26); reviewed in (27)] and by weak polypyrimidine tracts (PPTs), which are interrupted by weakening purines (28). In one reported case, a PPT is additionally attenuated by a secondary structure (29). For simple retroviruses such as Rous sarcoma virus (RSV) or murine leukemia virus (MLV), alternative splicing has also been attributed to weak 3′ss (30–32). In addition, RSV harbors a decoy 5′ss, which redirects splicing activity from the authentic 5′ss to a nonproductive one (33). We could previously show that in MLV the 5′ss, instead of the 3′ss, is negatively regulated via upstream sequences, which can form a secondary structure. Moreover, the stability and integrity of this structure correlate with 5′ss attenuation (34). We now demonstrate that the restriction exerted by this upstream RNA secondary structure depends on limited complementarity between U1 snRNA and the 5′ss at position +6. We show that the RNA secondary structure-mediated U1 snRNA restriction could be counterbalanced either by increasing complementarity to U1 snRNA or by SR protein-mediated stabilization of the 5′ss:U1 snRNA duplex. The latter was accomplished by an improved MS2-tethering system, which exerts a high affinity to a single stem-loop binding site. A heterologous RNA stem-loop of comparable thermodynamic stability could replace the wild-type structure and preserve the splicing pattern. Interestingly, low complementarity to U1 snRNA affects splicing only in the context of the secondary structure, arguing for a dynamic interplay between structure and intrinsic strength of the 5′ss.

U1 snRNA mutants were cloned by PCR using a reverse primer (rv: 5′-CGC GGA TCC TCC ACT GTA GGA TTA AC-3′) including a BamHI restriction site and different forward primers containing the U1 mutations flanked by a BglII restriction site (fw U1G11C: 5′-GCC CGA AGA TCT CAT ACT TAC CTC GCA G-3′; fw U1 perfect: 5′-GCC CGA AGA TCT CCA GCT TAC CTC GCA G-3′). The PCR products were cloned as a BamHI/BglII fragment into the pUC19 U1wt plasmid (kind gift from A. Weiner, Seattle, WA, USA).

Cells, transfections and virus titer

293T cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum, 1 mM sodium pyruvate and 1% penicillin/streptomycin. The day before transfection, 4 × 10⁶ 293T cells were seeded in a 10-cm plate. Transfections were performed using the calcium phosphate precipitation method with 5 μg of the SF91 plasmid (10 μg for proviral constructs). Medium was changed 8 h post-transfection, and the cells were harvested 48 h later. Transfection efficiency was measured by FACS analysis and ranged between 40% and 60% (FACSCalibur; Becton-Dickinson, Heidelberg, Germany). The co-transfection assays (MS2 and U1) were performed using 5 μg SF91 plasmid and 10 μg U1 plasmid or 5 μg MS2 plasmid. HeLaP4 cells (40) were cultured under conditions identical to those for 293T cells. The day before transfection, 6 × 10⁵ HeLaP4 cells were seeded in a 6-cm plate. Cells were transfected using the ICAFectin™ 441 DNA transfection reagent (Eurogentec, Brussels, Belgium) and 2.75 μg plasmid DNA. RNA was harvested 36 h later.
For virus production, 293T cells were transfected with 10 μg of proviral plasmids. Supernatants were collected 36 h and 48 h after transfection and passed through a 0.22 μm filter (Millipore, Schwalbach, Germany). Initial titers were determined by transduction of 1 × 10⁵ SC-1 cells (murine fibroblasts) using serial dilutions in the presence of 4 μg/ml protamine sulfate. Cells were centrifuged at 950 g and 25°C for 60 min and incubated for not more than 24 h to avoid re-infection. Cells were harvested and GFP-positive cells were counted by flow cytometry. GFP is encoded in frame with the MLV env ORF (38). SC-1 cells were then infected with a multiplicity of infection of 0.3. The spreading infection was monitored using GFP fluorescence, and supernatants were collected at 90% GFP-positive cells. The supernatants were re-titrated on SC-1 cells to determine the viral titers after replication in murine cells.

RNA preparation and analysis

Preparation of total RNA, gel electrophoresis, blotting and detection with a radiolabeled probe were performed as described previously (41). The eGFP-specific probe corresponds to the eGFP cDNA and was generated by digestion of the SF91 plasmid with AgeI and EcoRI. To detect 18S rRNA, a genomic fragment was PCR-amplified and subcloned into pCR2.1 (Invitrogen, Karlsruhe, Germany). The probes were radiolabeled using the DecaLabel Kit (Fermentas, St. Leon-Rot, Germany). RNA was quantified photometrically, and 10 μg were used for northern blot analysis, if not stated otherwise.

Phosphoimager analysis

The different RNA species were quantified by phosphoimager analysis (Storm 820; GE Healthcare, Chalfont St. Giles, UK) using a short and a longer exposure. Only experiments where the fold enhancement or splicing ratios remained constant over both exposure times were processed. The percentage of unspliced RNA was calculated using the following formula: [unspliced RNA/(spliced RNA + unspliced RNA)] × 100 = % unspliced RNA.

Reverse transcription and quantitative real-time PCR analysis

For reverse transcription, 5 μg total RNA were digested using the Ambion TURBO™ DNase (Austin, TX, USA). 500–800 ng RNA were reverse transcribed using the QuantiTect Reverse Transcription Kit (Qiagen, Hilden, Germany). Quantitative PCR was performed on a Stratagene Mx 3000P (La Jolla, CA, USA) using the QuantiFast SYBR Green PCR Kit (Qiagen). To detect the unspliced RNA, fw primer 5′-GAG GGT CTC CTC AGA TTG ATT GAC-3′ and rv primer 5′-GAC AGA CAC GAA ACG ACC GC-3′ were used. To detect the spliced RNA, the fw primer was combined with an exon junction primer: 5′-TGT AAG TGA GCT CCC GGC-3′. As a standard we used a U1 snRNA primer set (fw: 5′-CTT ACC TGG CAG GGG AGA TAC-3′ and rv: 5′-GAA AGC GCG AAC GCA GTC-3′).

Electromobility shift assay

Using SF91 stem-loop and stem-loop antisense templates (Figure 3A), PCR products were generated carrying a T7 promoter. The products were ligated into pCR2.1. T7 transcripts correspond exactly to the sequence depicted in Figure 3A. The resulting plasmids were linearized, and approximately 500 ng were used for in vitro transcription using Ambion's Maxiscript Kit in the presence of 50 μCi of [32P]UTP (400 Ci/mmol, Hartmann Analytic, Braunschweig, Germany) and unlabeled UTP to a final concentration of 40 μM. Free nucleotides were removed by MoBiTec S300 columns, and probes were purified using denaturing PAGE. The radiolabeled RNA probe (25 000 c.p.m.)
was incubated in 25 mM Tris-HCl, pH 7.9, 5 mM MgCl2, 10% (v/v) glycerol, 0.4 mM dithiothreitol, 0.5 mM EDTA, 10 U RNasin (Promega), 4 μg BSA and 1 μg of yeast tRNA in a total volume of 15 μl for 30 min at 20°C to allow folding of the RNA structure, followed by denaturing for 2 min at 90°C. Purified U1 snRNP was purchased from Phadia (Freiburg, Germany). The purification procedure was performed using the original protocol from the Lührmann laboratory (42). U1 snRNP was added 10 min prior to loading on a 6% (60:1) polyacrylamide gel run in Tris-borate-EDTA buffer. The presence of U1 snRNA in the U1 snRNP fraction was assessed using RT-PCR along with a U1 plasmid as a control (Supplementary Figure 2).

RESULTS

Increasing 5′ splice site strength partially relieves the restriction exerted by the secondary structure

In order to reveal the molecular requirements for the previously described RNA secondary structure-mediated restriction of the murine leukemia virus (MLV) 5′ss, we used our previously described splicing reporter SF91 (34). This retroviral vector is derived from an MLV provirus and contains the packaging signal (ψ) embedded in an intron flanked by the authentic MLV 5′ and 3′ splice sites, followed by eGFP as a marker gene (Figure 1A; 35). All reporter constructs were transiently transfected into 293T cells and total RNA was analyzed by northern blotting and quantified using a phosphoimager. In addition, we independently assessed the splicing ratios by quantitative RT-PCR using the same RNAs. We now considered whether the intrinsic strength of the 5′ss or its complementarity to U1 snRNA might contribute to the splicing regulation exerted by the secondary structure. Therefore, we converted position +6 of the MLV 5′ss from C to U (Figure 1B), thereby increasing the complementarity from an HBond score of 17.1 to 19.1. The C6U mutant displayed a 2.5-fold enhancement of splicing in the context of the secondary structure (Figure 1C and D; lanes 1 and 2). In comparison, mutations which exclusively prevent stem formation led to an 8-fold enhancement of splicing (sm3, Figure 1B; Figure 1C and D; compare lanes 2 and 3). These results confirm that, in a less structured region, fewer nucleotides complementary to U1 snRNA are sufficient to constitute an efficient 5′ss. A combination of the two mutations led to an almost complete lack of unspliced RNA (Figure 1C, lane 4). As shown in Figure 1D, the phosphoimager data are highly comparable to the qRT-PCR data, although the latter tend to yield slightly higher values for the unspliced RNA in general (Figure 1D, black bars). Therefore, we continued to use northern blotting in the subsequent experiments to be able to detect cryptic splicing events as a possible result of the introduced mutations. To investigate whether increasing the 5′ss:U1 snRNA complementarity could also relieve the restriction of a thermodynamically even more stable RNA secondary structure, we used a deletion of the primer binding site (Figure 1B), which folds into a more stable stem-loop (ΔG = -68.5 kcal/mol in comparison to -61.4 kcal/mol for SF91 wt) and shows a stronger attenuation of the 5′ss (dPBS; Figure 1C, lane 5). Also in this context, the C6U mutation enhanced splicing (Figure 1C, lane 6; and Figure 1D). Both the C6U and dPBS mutations seem to influence the transcript levels of the SF91 RNA (Figures 1C and 3). In summary, regulation at the 5′ss occurs on two layers: the primary splice site sequence and an upstream secondary structure.
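To make the phosphoimager quantification described in the Methods concrete, the following is a minimal sketch of how the % unspliced RNA formula and the two-exposure consistency check could be applied; the band intensities and the 2-percentage-point tolerance are invented for illustration and are not the authors' actual values or analysis script.

```python
# Minimal sketch of the phosphoimager quantification described in Methods.
# Band intensities are hypothetical illustrative values, not measured data.

def percent_unspliced(unspliced: float, spliced: float) -> float:
    """[unspliced / (spliced + unspliced)] * 100 = % unspliced RNA."""
    return unspliced / (spliced + unspliced) * 100.0

# Two exposures of the same blot; only experiments where the splicing
# ratio stays constant over both exposure times are processed.
short_exposure = {"unspliced": 12000.0, "spliced": 4000.0}
long_exposure  = {"unspliced": 36500.0, "spliced": 12100.0}

p_short = percent_unspliced(**short_exposure)
p_long  = percent_unspliced(**long_exposure)

print(f"short exposure: {p_short:.1f}% unspliced")
print(f"long exposure:  {p_long:.1f}% unspliced")

# Accept the measurement only if both exposures agree within a tolerance
# (an assumed threshold for illustration; the paper states no number).
TOLERANCE = 2.0  # percentage points
if abs(p_short - p_long) <= TOLERANCE:
    print("exposures consistent -> process experiment")
else:
    print("exposures inconsistent -> discard experiment")
```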
The regulation of splicing is conserved in a replication-competent provirus

In order to confirm that splicing in the provirus is also regulated on two layers, secondary structure and primary 5′ss sequence, we cloned two key splicing mutations, namely sm3 (structural mutant; Figure 1C, lane 3) and C6U (splice site mutant; Figure 1C, lane 2), into an infectious MLV provirus (Figure 1A). Both types of mutations yielded an enhancement of splicing identical to that seen in the splicing reporter (Figure 2A). Even though the provirus splices less efficiently than the reporter, the magnitude of splicing enhancement is maintained (Figure 2C). Oversplicing of the genomic RNA should lead to reduced levels of full-length RNA to be packaged into viral progeny and to lower amounts of structural proteins, since these are translated only from the unspliced RNA. Western blot analysis of cell lysates harvested from 293T cells revealed a correlation between the splicing ratio and the amount of capsid (p30) protein (Figure 2B). We also tested whether these mutations hamper viral replication. Indeed, replication of the sm3 provirus in murine fibroblast cells (SC-1) led to an almost 4-fold decrease in viral titer compared to wild-type MLV (Figure 2D). The C6U mutant showed only a modest titer reduction, in agreement with the viral RNA and protein analysis. Thus, there is a correlation between the extent of splicing and viral fitness. To sum up, the regulation of alternative splicing is conserved in the context of the complete provirus, and mutations that severely affect splicing ratios impede viral replication.

The attenuating effect of the secondary structure is transferable to a heterologous 5′ splice site

Next we asked: did the inhibitory secondary structure evolve and function only in murine leukemia virus? In addition, we wanted to rule out the possibility that sequences far downstream of the 5′ss are involved in its attenuation. For this purpose, we used the previously described HIV-NLenv system (Figure 3A; 44). Briefly, this subviral envelope expression system is based on the HIV-1 proviral clone NL4-3. Using the Tat-independent CMV promoter instead of the viral LTR (NLCenv), the construct expresses a sequence-identical HIV env mRNA, which can be alternatively spliced to yield nef mRNA (Figure 3A; 37). RNA analysis showed that NLCenv displays alternative splicing due to the weak 3′ss upstream of nef (Figure 3C, lane 2; 20,44). In order to transfer the splicing regulation from MLV to a heterologous 5′ss, we inserted the 5′ leader sequence of MLV (bases 1-203; Figure 1B) immediately upstream of the HIV 5′ss, thereby creating SCSenv (light grey box, Figure 3A). SCSenv expresses an MLV/HIV fusion transcript containing the MLV-derived secondary structure (Figure 1B) followed by the HIV 5′ss and downstream env mRNA sequences. The proposed secondary structure of the fusion transcript is depicted in Figure 3B. The MLV RNA stem-loop may also force the HIV 5′ss into an inhibitory conformation. In addition, we transferred two structural mutations (sm3 and dPBS; Figure 1C) to SCSenv. For this experiment, we used HeLaP4 cells, in which the NLenv system was established (44), instead of 293T cells. Transfection into HeLaP4 cells revealed that the MLV-derived sequences can also strongly attenuate the HIV 5′ss (Figure 3C, compare lanes 2 and 3).
The level of unspliced RNA was enhanced 2.5-fold (Figure 3D) and exceeded the level of 5′ss attenuation observed for the basic construct SF91 containing the identical leader sequence (Figure 1C and D). Comparison of the two 5′ss using the HBond score algorithm (45) revealed that the HIV 5′ss is intrinsically weaker than that of MLV (HIV HBond score 15.7; MLV HBond score 17.1). This is in agreement with our hypothesis that a 5′ss with lower complementarity should be even more prone to attenuation via the RNA secondary structure. Moreover, the mode of splicing regulation could be transferred onto a heterologous 5′ss, since the sm3 mutant is spliced more efficiently and a deletion of the PBS leads to more unspliced RNA (Figure 3D, compare columns 3 and 4). Furthermore, no downstream MLV-derived sequences were involved in 5′ss attenuation. We noted that the sm3 mutant did not splice to the same extent as NLCenv, although the 5′ss should be accessible to the spliceosome (Figure 3C and D). However, this can be explained by a purine-rich splicing enhancer present only in the HIV leader sequence upstream of the 5′ss (39,46). In addition, the degree of splice site attenuation influenced the total amount of RNA (Figure 3C). The dPBS mutant in particular displayed the highest level of unspliced RNA, but the lowest amount of RNA (Figure 3C, lanes 5 and 6). In order to visualize the ratio of unspliced vs. spliced RNA, this part of the northern blot had to be overexposed (Figure 3C, lane 6). The experiments using the HIV env expression system demonstrate that the splicing regulation observed is not restricted to MLV, but can be transferred to a heterologous 5′ss.

Replacement of the RNA structure with random sequences capable of stem formation results in proper splicing regulation

In order to differentiate whether mere structural requirements of the leader region or specific sequences harboring splicing regulatory protein binding sites cause 5′ss attenuation, we replaced the upper part of the RNA structure with a heterologous stem-loop harboring the ability to form a stem with a free energy similar to that of the wild type (Figures 1B and 4A; SF91: ΔG = -49.6 kcal/mol; heterologous stem-loop: ΔG = -47.2 kcal/mol). RNA analysis of transient transfections of the stem-loop construct revealed a splicing pattern almost identical to that of the wild-type reporter (Figure 4B, lanes 1 and 2). Thus, 5′ss attenuation can be attributed to RNA stem-loop formation and not to the primary sequence which forms the structure. As a control, we reversed the descending part of the stem, resulting in a stem-loop antisense (as) construct, which is unable to form the secondary structure upstream of the 5′ss. This construct displayed complete splicing (Figure 4B, lane 3). In contrast, strengthening the stem by a 20-bp extension on either side (ΔG = -68.8 kcal/mol) led to more unspliced RNA (Figure 4B and C; lane 4), reminiscent of the deletion of the PBS (Figure 1C, lane 5). Furthermore, the C6U mutation enhances splicing as in the wild-type context, suggesting that this interplay does not require specific cellular proteins binding to the MLV stem-loop (Figure 4B, lane 5). In addition, we noted that, similar to the SCSenv plasmid (Figure 3C), the splicing efficiency correlated with the overall transcript levels (Figure 4B, compare lanes 3 and 4). Therefore, the lane containing the stem-loop antisense (as) construct was underloaded for proper visualization (Figure 4B, lane 3).
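For orientation, the sketch below simply collects the stem free energies quoted in this and the preceding sections and orders the constructs by stability; the splicing outcomes are paraphrased from the text, not computed, so this is an illustrative summary rather than an analysis.

```python
# Constructs and stem free energies (kcal/mol) as quoted in the text,
# paired with the qualitative splicing outcome reported for each.
constructs = [
    ("SF91 wt stem",            -61.4, "attenuated 5'ss, alternative splicing"),
    ("dPBS (stabilized stem)",  -68.5, "stronger attenuation, more unspliced RNA"),
    ("upper stem of SF91 wt",   -49.6, "reference for the heterologous exchange"),
    ("heterologous stem-loop",  -47.2, "wild-type-like splicing pattern"),
    ("extended stem (+20 bp)",  -68.8, "more unspliced RNA, like dPBS"),
]

# Sort from most to least stable: a lower (more negative) dG means
# a more stable stem, which correlates with stronger 5'ss attenuation.
for name, dG, outcome in sorted(constructs, key=lambda c: c[1]):
    print(f"{name:26s} dG = {dG:6.1f} kcal/mol  ->  {outcome}")
```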
The antisense mutant showed that the 5′ss can function highly efficiently in the absence of the stem-loop despite the mismatch at position +6. This position became critical only in conjunction with the secondary structure, pointing to a novel dynamic interplay between structure and primary splice site sequence.

SR protein domains targeted to the heterologous stem-loop partially overcome 5′ss attenuation

It has been shown in yeast that 5′ and 3′ss mutations can be rescued by SR proteins, even though Saccharomyces cerevisiae does not code for such proteins (47). This hints at a mechanism whereby SR proteins might stabilize RNA:RNA interactions at weak splice sites. Since the secondary structure may restrict access of U1 snRNP, we anticipated that targeting an SR protein into the vicinity of the 5′ss would overcome this attenuation by supporting 5′ss:U1 snRNA duplex formation. Tethering of protein domains to an RNA of interest became possible with the MS2-fusion tethering system (48,49). The coat protein of the MS2 phage binds with high affinity to its target sequence, which is a short RNA stem-loop (Figure 5A). However, due to structural constraints, our heterologous RNA stem-loop accommodates only one MS2 binding site. We therefore used the ΔFG variant of the MS2 coat protein, which leads to an increase in RNA binding affinity at the expense of dimer:dimer formation (50). This modification should theoretically result in enhanced binding to a single site in vivo (Figure 5A). In a co-transfection assay, we tested activation domains (RS domains) of various SR proteins fused to MS2ΔFG along with the SF91 construct and two RNA stem-loop variants harboring a single MS2 binding site (Figure 5B). Co-transfection of SF91 and a plasmid encoding the RS domain of SRp55 fused to MS2ΔFG is not neutral and leads to a slight but statistically significant enhancement of splicing (Figure 5B, lanes 1 and 2, and Figure 5C; P-value = 0.03). However, in the context of the RNA stem-loop and the extended stem-loop construct, co-transfection of the MS2-SRp55 plasmid led to a highly significant enhancement of splicing (Figure 5B and C, compare lanes 3-6; P-values = 0.009 and 0.002). In general, these experiments proved for the first time in vivo that MS2ΔFG mutants can be targeted to a single binding site with reasonable efficiency. Moreover, tethering of an RS domain to the heterologous RNA stem-loop partially overcomes 5′ss attenuation, in agreement with the previous results indicating that the RNA structure may restrict access of U1 snRNP.

5′ splice site attenuation can be rescued by hyperstable U1 snRNA suppressor mutants

In addition to protein:protein contacts of U1 snRNP with exonic or intronic sequences (51,52), recognition of 5′ss is initiated by RNA:RNA interactions (9,53). Therefore, the hydrogen bonding pattern between U1 snRNA and the 5′ss is critical and can be enhanced either by mutations within the 5′ss (C6U mutation, Figure 1C, lane 2) or by overexpression of U1 snRNA suppressor mutants that increase the complementarity to a given 5′ss (9,54). In yeast, hyperstable U1 snRNA mutants cannot be displaced by U6 snRNA and therefore splicing is inhibited (55). In contrast, in mammalian cells, an extended U1 snRNA:5′ss interaction does not decrease splicing efficiency, but rather increases 5′ss recognition (56,57).
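Before the mutants are described, a rough feel for how complementarity translates into splice-site strength can be given by a toy scorer that merely counts Watson-Crick and wobble pairs between a 5′ss and the U1 snRNA 5′ end; the HBond score used in the paper (45) is a weighted measure and will give different numbers, and the two 5′ss sequences below are hypothetical stand-ins, not the actual MLV sequence.

```python
# Toy complementarity score between a 5'ss and the 5' end of U1 snRNA.
# This simply counts Watson-Crick and wobble pairs; the HBond score used
# in the paper (ref. 45) is position-weighted and will differ numerically.

U1_5_END = "AUACUUACCUG"  # 5'->3' terminus of human U1 snRNA

def duplex_pairs(five_ss: str, u1: str = U1_5_END) -> int:
    """Count paired positions when the 5'ss (read 5'->3') is aligned
    antiparallel to the U1 snRNA 5' end (read 3'->5')."""
    ok = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    return sum((a, b) in ok for a, b in zip(five_ss, reversed(u1)))

# Hypothetical 5'ss sequences (exonic -3..-1 | intronic +1..+8) used only
# for illustration; they follow the GU-AG rule but are NOT the MLV sequence.
wildtype_like = "CAGGUAAGCAG"   # mismatch at +6 (C instead of U)
c6u_like      = "CAGGUAAGUAG"   # +6 reverted to U -> one more base pair

for name, ss in [("wildtype-like (+6 C)", wildtype_like),
                 ("C6U-like     (+6 U)", c6u_like)]:
    print(f"{name}: {duplex_pairs(ss)} of {len(U1_5_END)} positions paired")
```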
We constructed two U1 snRNA mutants containing either one substitution (U1 G11C), leading to eight complementary base pairs with the MLV 5′ss (Figure 6A; HBond score 18.8), or a perfect match of U1 snRNA, leading to 11 continuous base pairs (U1 perfect; Figure 6A; HBond score 23.8). Overexpression of these suppressor mutants led to an increase in splicing depending on the intrinsic strength of the RNA duplex (Figure 6B and C). An increase in complementarity of one base pair (U1 G11C mutant, Figure 6A, lower panel) already led to an enhancement of splicing (Figure 6C). This argues for a dynamic balance between secondary structure and accessibility of the 5′ss to U1 snRNA. As a control, we co-transfected the structural sm3 mutant along with the U1 perfect suppressor snRNA and observed no change in the enhancement of splicing (Supplementary Figure 1). In addition, we looked at the direct interaction between U1 snRNP and the 5′ss in vitro by EMSA. In order to minimize unspecific protein:RNA interactions, we used the heterologous stem-loop and the antisense mutant thereof as depicted in Figure 4A. We first established conditions that permit folding of the secondary structure, but not of the antisense mutant (Figure 6D, lanes 1 and 2). Under denaturing conditions, both RNAs showed a similar migration behavior (data not shown). Purified U1 snRNP was incubated with the in vitro transcribed, ³²P-labeled and folded RNA. We observed an enhanced binding of U1 snRNP to the 5′ss in the absence of the RNA secondary structure (i.e. the antisense mutant, Figure 6D, lanes 4, 6 and 8) and reduced binding upon formation of the structure (Figure 6D, lanes 3, 5 and 7). These experiments demonstrate that splicing regulation in MLV uses restricted access of U1 snRNP to the 5′ss exerted by the upstream secondary structure and limited complementarity of the primary 5′ss sequence to U1 snRNA.

DISCUSSION

As presented here, murine leukemia virus uses a dynamic interplay between RNA secondary structure and the intrinsic strength of the 5′ss to restrict access of U1 snRNP to its 5′ss, which ultimately results in alternative splicing and full viral gene expression. Secondary structure has been implicated in alternative splicing early on (18,58-60). Cellular examples of attenuated 5′ss which are part of a secondary structure were discovered in association with different genetic diseases (56,61,62). Modulation of splicing efficiency by RNA secondary structures has recently also been described for the HIV-1 leader RNA structure, where the major 5′ss is embedded in a semi-stable hairpin (63). However, contrary to HIV-1, splicing regulation at the MLV 5′ss seems to be much more complex, since stem mutations even 25 nt upstream of the 5′ss already provoke an 8-fold enhancement of splicing (Figure 1). Certainly, secondary structures can not only sequester 5′ss, but also modulate the binding efficiency of hnRNPs or SR proteins. On a global scale, it was shown that splicing enhancers and silencers are present mostly in single-stranded regions (64). There is also a particular example in HIV, where a change in secondary structure allows hnRNP H to bind and influence splicing (65). RNA secondary structures are also statistically associated with alternative 5′ss (66), allowing splice site selection via conformational variability.
An exchange of the upper part of the structure with a heterologous stem-loop proved that the main function of the stem is to force the 5′ss into an inhibitory conformation, and the extent of splicing inhibition correlates closely with the free energy of the structure (Figure 4). Thus, the secondary structure can be viewed as a silencer element, which converts a strong 5′ss into a weak one. Not surprisingly, the complex RNA stem-loop possesses additional functions in the viral life cycle. The structure allows looping of the primer binding site, which binds a cellular tRNA as a primer to initiate reverse transcription (3). Yet, splicing regulation could be transferred to a complete provirus and to a heterologous HIV 5′ss (Figures 2 and 3). Using the MLV/HIV hybrid plasmids, it seemed that the degree of 5′ss inhibition correlates inversely with the overall RNA amount (Figure 3C). Similar effects have been observed in other studies, where splice sites are able to enhance transcriptional elongation (67) and gene expression in general (68). It appears that the CMV promoter is highly dependent on this positive feedback exerted by the interaction of U1 snRNP with a proximal 5′ss (69,70). This suggested to us that the secondary structure restricts access of U1 snRNP. Additional evidence was obtained by tethering RS domains to the heterologous stem-loop, which enhanced splicing (Figure 5). The relatively modest effects are most probably due to the insertion of only a single MS2 binding site (Figure 5A). The MS2ΔFG mutants used here can partly compensate for the lack of multiple binding sites.

[Displaced fragment of the Figure 5 legend: "... Figure 1C. As a control, a plasmid encoding only the MS2 coat protein was co-transfected as indicated. The RS domain of SRp55 was fused to the modified MS2 protein. The RNA species are marked on the right. (C) Phosphoimager analysis. Splicing efficiency is displayed as unspliced/spliced RNA ratio. Student's t-test was performed using mean values from four independent experiments. *P = 0.03; **P = 0.009; ***P = 0.002."]

RS domains may play a dual role in splice site selection. They can enhance RNA:RNA interactions at degenerate splice sites (47,71) or engage in protein:protein interactions, since an excess of SR proteins can select 5′ss in the absence of U1 snRNP (72). In our case, the RS domain may directly assist U1 snRNA binding to the attenuated 5′ss by interacting with U1-70K (73). In addition, the retroviral 5′ss is characterized by a mismatch at position +6. Reversion of this position into a nucleotide complementary to U1 snRNA enhances splicing 2.5-fold. Although this position is less conserved on a genome scale, it turns into a preserved nucleotide if the 5′ss is degenerate or weakened by surrounding elements (10). For example, a U6C mutation in the 5′ss flanking exon 20 of the IKBKAP gene causes exon skipping, resulting in familial dysautonomia (74). In this case, pre-existing attenuation is due to a weak upstream 3′ss and possible splicing silencers (75). An additional weakening of the 5′ss finally results in the disease-causing splicing phenotype. In this line, position +6 of the MLV 5′ss regulates efficiency only in the context of the secondary structure. Therefore, there is a dynamic interplay between structure and the 5′ss sequence, where the structure makes the 5′ss susceptible to silencing, as was observed for cellular silencer motifs (76). The same result was obtained when we fused the complete second intron of β-globin to the MLV secondary structure.
Alternative splicing occurred only after lowering the complementarity at positions +6 or -3 of the globin 5′ss (Zychlinski, D. and Bohne, J., unpublished data). We also mutated positions +7 and +8 of the MLV 5′ss back to the consensus and observed an enhancement of splicing (data not shown). However, the enhancement was not as strong as that observed for position +6. One may speculate that not solely the complementarity to U1 snRNA, but also the length and neighborhood of the 5′ss:U1 snRNA duplex determine the strength of a splice site. This novel dynamic interplay of RNA secondary structure and low complementarity of the 5′ss to U1 snRNA demonstrates the fine-tuning of alternative splicing in retroviruses and mammalian cells and illustrates the sophisticated organization of the splicing code.

[Displaced fragment of a figure legend: "... as in Figure 4A. The RNAs were incubated with increasing amounts of purified U1 snRNP (50 ng, 125 ng and 250 ng)."]

SUPPLEMENTARY DATA

Supplementary Data are available at NAR Online.
Fabrication of ultra-small GSH-AgNCs with excellent eschar penetration and antibacterial properties for the healing of burn infection wounds

There is evidence of bioburden as a barrier to chronic burn wound healing. Compared to traditional therapy, nanotechnology has availed a revolutionary approach to therapeutic and diagnostic applications in burns. In this article, we developed glutathione-protected Ag nanoclusters (GSH-AgNCs) to manage burn wound infection. Owing to their specific structure, the GSH-AgNCs emitted strong red fluorescence under UV excitation, quantified via both in vivo and in vitro techniques. The GSH-AgNCs showed significant inhibition of the proliferation of Staphylococcus aureus (S. aureus), Escherichia coli (E. coli), Pseudomonas aeruginosa (P. aeruginosa), and methicillin-resistant Staphylococcus aureus (MRSA) hiding under the eschar. Of note, with a 2-6 nm particle size, GSH-AgNCs are effectively removed by renal excretion, advocating for their biomedical and pharmacological applications.

Introduction
Skin is the largest organ in the body. It is a crucial barrier that blocks the entry of pathogens in vivo. Burn injury-associated impairment of the anatomical structure and function causes wounds [1]. Of note, the openness of burn wounds, protein coagulation, and exudation of plasma components create ideal conditions for bacterial growth. Consequently, invasive infection complications caused by sepsis readily occur, particularly when the systemic immune function is compromised [2,3]. An estimated 180,000 burn injury-associated deaths are reported every year, according to the World Health Organization. Sepsis and the accompanying invasive infection remain the primary causes of such burn deaths [4-8]. Thus, antibacterial treatment is essential for burn injury healing. Although antibiotics are applied as a traditional burn infection therapy [9,10], their long-term use has contributed to a rise in cases of microbial drug resistance, organ damage, and allergic reaction, among other adverse side effects [11,12]. A series of new approaches have been proposed as escape routes for such complications, including new antimicrobial agents (such as antimicrobial peptides, amino acid-based surfactants, etc.), antimicrobial photo- and ultrasound-therapy, and natural products such as honey [13-17]. Among these, nanomedicine as a novel approach has been firmly applied to manage burn infections. Systemic antibacterial drugs can hardly reach the infected site; thus, topical antimicrobials and dressings are generally employed to enhance wound healing and resist reinfection [18]. Full-thickness burn wounds usually form eschars, making it difficult for topical antibacterial agents to act on deep infections [19]. Moreover, the excellent penetrating ability of nanoclusters has received immense application in transdermal drug delivery, brain targeting, and tumor-targeting therapy [20-23]. Despite unclear mechanisms underlying tissue barrier permeability, accumulating evidence insinuates that the small size of nanomaterials exerts a pivotal role in tissue penetration. Numerous reports also found that Ag nanoparticles smaller than 30 nm can penetrate the deepest part of the stratum corneum [24-28]. Thus, we hypothesize that the GSH-AgNCs (2-6 nm in size) designed in this work could demonstrate good permeability. The widespread application of nanotechnology in medicine is highly promising in the treatment of several diseases.
Regrettably, their toxicity remains a potential limiting challenge to developing various nanotechnologies for clinical translation [29]. Numerous factors contribute to the cytotoxicity and genotoxicity of silver nanoclusters; for instance, the first toxicity reaction after in vivo injection of silver nanoclusters is the formation of a protein corona. The biological response to these coronas is critical for nanotoxicology. Various proteins (i.e., human serum albumin, tubulin, ubiquitin, yeast extract proteins, etc.) are adsorbed onto the surface of silver nanoclusters [30]. Glutathione is a natural tripeptide with a high affinity to metal surfaces. Silver nanoclusters protected by GSH potentially acquire better biocompatibility by blocking the formation of a corona [31]. Particle size is an important factor influencing nanocluster distribution. Smaller-sized nanoparticles are more widely distributed than large-sized ones. In particular, reports show that 10 nm diameter silver nanoparticles are enriched in several organs (e.g., liver, spleen, kidney, testis, thymus, heart, lung, brain, etc.), whereas larger nanoparticles are specific to blood, liver, and spleen, implicating that 2-6 nm diameter silver nanoparticles are more evenly distributed and less enriched in tissues [32-34]. In vivo, nanoparticles continuously induce inflammation and other toxic reactions; thus, if they do not get cleared, their long-term fate in the organs is not clear. The renal filtration threshold of nanoparticle size is generally 5.5 nm. However, there is no report on whether larger nanoparticles are metabolized by the kidney, which may be related to their rigidity [33,35-37]. In most cases, sepsis occurs secondary to severe infection, pneumonia, burn, and major surgery. Therefore, systemic administration must be considered for the effective prevention and treatment of invasive infections [2,8]. Since pharmacokinetics after burn trauma is challenging to predict, real-time therapeutic drug monitoring (TDM) is essential for most antibiotic therapy for burns, to maintain a safe plasma concentration [38,39]. The pharmacokinetic models of NPs are considered multicompartment models. However, their distribution and excretion are affected by various factors, posing challenges in predicting the risk of adverse effects using plasma concentration. Herein, inspired by tumor fluorescent markers, we propose a new strategy to adjust the dosage and enhance the safety of nanomedicine through in vivo examination of nanocluster metabolism [40-42]. This work proposes a new approach to reduce health risks by integrating fluorescent tracing and renal excretion. It is argued that GSH-AgNCs possess an excellent broad-spectrum antibacterial activity, which potentially promotes wound healing: they penetrate eschars and kill bacteria under the eschar before these enter the bloodstream, preventing secondary sepsis and invasive infections. The unique fluorescence-excitation properties allow the degree of enrichment in the tissue to be grasped. This justifies the excellent permeability and antibacterial effects of GSH-AgNCs. Moreover, excretion greatly improves the antibacterial applicability and biosafety of the silver nanoparticles.

Preparation of glutathione-protected Ag nanoclusters
The material synthesis process was adopted as previously described. Briefly, 125 μL of 20 mM silver nitrate and 150 μL of 50 mM glutathione were dissolved in 4.85 mL of water. Afterward, 50 μL of 0.1 mM sodium borohydride dissolved in 0.2 mM sodium hydroxide was added over 5 minutes, and the mixture was aged for 3 h. Then, another 50 μL of the sodium borohydride solution was added, stirred for 15 min, and aged for 6 h.
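As a small worked check of the recipe above, the molar amounts and the Ag⁺:GSH ratio follow directly from the stated volumes and concentrations; the sketch below (not part of the original protocol) computes them.

```python
# Moles from the synthesis recipe: n = C * V.
# 125 uL of 20 mM AgNO3 and 150 uL of 50 mM glutathione (GSH).

def micromol(conc_mM: float, vol_uL: float) -> float:
    """mM * uL gives nmol; dividing by 1000 converts to umol."""
    return conc_mM * vol_uL / 1000.0

n_ag = micromol(20.0, 125.0)    # 2.5 umol Ag+
n_gsh = micromol(50.0, 150.0)   # 7.5 umol GSH

print(f"Ag+: {n_ag:.2f} umol, GSH: {n_gsh:.2f} umol")
print(f"molar ratio Ag+:GSH = 1:{n_gsh / n_ag:.0f}")  # threefold ligand excess
```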
Characterization of GSH-AgNCs
The structural and size properties of the GSH-AgNCs were examined using transmission electron microscopy, JEOL 03040701 (Kabushiki Kaisha, Japan). We applied the fluorescence spectrometer FS5 (Edinburgh Instruments, UK) to measure the fluorescence spectra. UV-Vis absorbance spectra were recorded with a Cary UV-Vis spectrophotometer.

Cell culture and in vitro cytotoxicity experiment
The cytotoxicity of GSH-AgNCs was evaluated through direct culturing of HSF cells in high-glucose DMEM supplemented with 10% FBS and 1% penicillin/streptomycin, followed by incubation at 37°C in a humidified atmosphere of 5% CO₂/95% air. For long-term toxicity evaluation, we assessed cell growth. Briefly, 1×10⁴ cells were seeded on a 24-well plate and then treated with 125 μM or 250 μM GSH-AgNCs for 1, 3, or 7 days. Cell growth was evaluated under the microscope.

In vitro antibacterial activity
The spread plate method was adopted to evaluate in vitro antibacterial activity. Using the Oxford cup method, we further evaluated the transdermal antimicrobial efficiency of the GSH-AgNCs. Eschar skin was obtained from a rabbit infected burn wound. Afterward, the eschar was placed on plates previously inoculated with the pathogen. Different concentrations of nanocluster dispersions were added to each Oxford cup placed on the above artificial eschar. After 24 h incubation at 37°C, images of the bacterial zone of inhibition were taken using a camera.

Fluorescence imaging in vivo
The fluorescence of the GSH-AgNCs was assessed using an In Vivo Imaging System. The microstructural destruction of organs was evaluated via HE staining. Briefly, organ tissues from the harvested heart, liver, spleen, lung, and kidney of the above mice were fixed with 10% buffered formalin for 24 h. Tissues were then paraffin-embedded, sectioned, and subjected to HE staining following the manufacturer's protocol. Histopathological changes were examined under a light microscope.

Burn infection model
New Zealand white rabbits were purchased from the Institute of Animal Science of Nanchang University and randomly categorized into two groups. We adopted the following steps to cause burns and infections. Briefly, after removing the back hair of the rabbits, they were anesthetized through the marginal ear vein with chloral hydrate at a dose of 250 mg/kg. Following previous reports, a full-thickness burn wound was made on the dorsal skin using a copper coin (11 mm, 92°C, 25 s). A Staphylococcus aureus suspension (200 μL, 1×10⁵ cells mL⁻¹) was injected subcutaneously into the burn. The GSH-AgNCs suspension was dropped on the eschar after the wound had scabbed. After that, the wound was covered with gauze and fixed with an elastic bandage. Images of the wound were taken at the planned time points (0, 3, 7, and 10 days). The wound was cut open with scissors to check for infection, and the surrounding gauze was removed. The antibacterial effect and tissue inflammation were evaluated using a hematoxylin-eosin staining kit.

Statistical analysis
Statistical computations were performed using one-way ANOVA between multiple groups. Data are expressed as mean ± standard deviation (SD). All statistical computations were performed using GraphPad Prism 8.0.1. The significance level was set at p < 0.05.

Characterization of GSH-AgNCs
The prepared GSH-AgNCs were characterized via TEM, UV-Vis absorption, and fluorescence spectra (Figure 1).
A typical TEM image of the GSH-AgNCs nanoparticles is presented in Figure 1a. The GSH-AgNCs were spherical and well dispersed; the corresponding excitation and emission spectra are shown in Figure 1d. Notably, the excitation spectrum monitored at 605 nm produced a broad excitation band from 400 to 500 nm (black line). Further, GSH-AgNCs showed a sharp excitation peak (red line) at 450 nm. The synthesized cluster-shaped silver nanoparticles emitted red fluorescence visible to the naked eye under ultraviolet light, which can be explained by the formation of fluorescent AgNCs. Based on previous findings, the characteristic absorption peaks at around 380-500 nm, as found for GSH-AgNCs, correspond to larger Ag nanoparticles [31,43] and could be attributed to the interparticle assembly of uncapped silver nanoclusters.

Proliferation and in vitro viability
After staining the dead cells (red), cells in both the 125 μM and 250 μM GSH-AgNCs experimental groups survived, with numbers only slightly below the average cell number of the control group. This suggested that GSH potentially reduces the toxicity of silver ions, thereby promoting the proliferation of HSF cells. Furthermore, CCK-8 analysis was undertaken to quantitatively analyze the toxicity of GSH-AgNCs to HSF cell survival. We co-cultivated different concentrations of GSH-AgNCs with HSF cells for 1, 3, and 7 days to validate the results of the CCK-8 experiment (Figure 2c). Of note, the toxicity of GSH-AgNCs to cells was dose-dependent. Compared to the control group, the survival rate of cells below 250 μM was not affected. Collectively, GSH-AgNCs concentrations lower than 250 μM demonstrated satisfactory biocompatibility, which is beneficial to the growth and proliferation of HSF cells and thus has potential biomedical applications.

In vivo biocompatibility
Silver nanoclusters display intense colors due to the collective oscillation of conduction electrons as they interact with light. Owing to their fluorescent properties, metal nanocluster materials have received wide application in tumor imaging, microRNA detection, and DNA base detection [49-51]. Here, the synthesized silver nanoparticles had previously been tested for their in vitro fluorescence capabilities. The real-time in vivo fluorescence performance of GSH-AgNCs was investigated in Kunming mice (Figure 3a). Similarly, they demonstrated good fluorescence imaging capabilities in vivo and thus can be applied as a dynamic biological probe. We observed the in vivo fluorescence signal after injection (Figure 3a). The signal of the nanoparticles attenuated further over time, which was not due to fluorescence quenching of the GSH-AgNCs but because they were excreted. By 24 h post-injection, most GSH-AgNCs had been cleared out. The accumulation site of the residual nanoclusters confirmed that GSH-AgNCs are of the renal excretion type, consistent with the theoretical pharmacokinetics of nanoclusters with 2-6 nm diameter. Results of major liver and kidney function tests (including ALT, AST, ALP, BUN, SCR, and TBIL) after intravenous injection of the GSH-AgNCs suspension are presented in Figure 3b. Compared to the control group, the experimental group showed no significant difference in the various indicators on the 7th day. This implied that the GSH-AgNCs nanoparticles did not influence the normal basic physiological functions of the liver and kidney of the mice post injection. H&E-stained tissue sections from the main organs (heart, liver, spleen, lung, and kidney) of KM mice 7 days after a single intravenous injection of the GSH-AgNCs suspension are depicted in Figure 3c.
Mice in the control group received 200 μL of normal saline intravenously; no obvious pathological changes were observed.

In vitro antibacterial activity
Microbial load reduces burn wound contraction and, eventually, death may occur. Silver and associated preparations have been applied as antimicrobials for thousands of years [46]. Though the mechanism of the antibacterial activity of AgNCs remains elusive, theories hypothesize that they (Ⅰ) react with thiol moieties of enzymes and proteins, (Ⅱ) produce free radicals, and (Ⅲ) cause membrane structure damage [47,48]. It is worth mentioning that the specific surface area of silver nanoclusters exerts a crucial role in the adsorption and destruction of proteins. Herein, the antibacterial property of GSH-AgNCs was quantitatively assessed against both Gram-positive and Gram-negative bacteria using CFU counts. The data are presented in Figure 4a. Eschar formation in the infected area of burns is an important factor in antibacterial treatment failure [7]. Referring to the upper schematic diagram (Figure 4b), different concentrations of GSH-AgNCs added to each Oxford cup allow the small-particle nanoclusters to penetrate the skin eschar at the bottom; eventually, the Staphylococcus aureus pre-inoculated on the plate dies. The image of the bacterial inhibition zone, taken after adding the cluster silver to the Oxford cup followed by 24 h incubation at 37°C, is shown in Figure 4b. GSH-AgNCs penetrated the eschar, exerting an antibacterial effect. The variation of the inhibition zone area with the concentration of GSH-AgNCs shows that they can penetrate the eschar, killing the bacteria in the wound and blood at the bottom of the eschar. We speculate that this may be due to the small size of the clustered silver nanoparticles, while the naturally occurring small peptide GSH can act as a protective shell; hence, GSH-AgNCs show excellent permeability in eschar tissue.

Treatment of infected burn wounds
After introducing a burn on the shaved back of a New Zealand white rabbit using a preheated brass block (92°C) for 25 seconds, Staphylococcus aureus was injected subcutaneously into the wound. Two days later, the wound crusted, and an abscess was found after cutting the wound open with scissors. This meant that a closed full-thickness burn wound infection model was successfully established. As shown in the schematic diagram of Figure 5a, to evaluate the effects of the GSH-AgNCs on the closed burn wound in vivo, a full-thickness burn wound infection model was used in the rabbit. The subsequent wound repair processes of the treated burn infection wounds were tracked over 10 days. The wounds were either dressed with PBS (blank) or treated with GSH-AgNCs. Significant differences in wound healing were observed during the entire process. Visual observations of burn wounds treated with GSH-AgNCs at different post-operation time points are presented in Figure 5b. Notably, wounds treated with GSH-AgNCs showed a significantly higher wound healing rate than the blank groups tested; such wounds appeared fully recovered at day 10. On the contrary, we observed a significant delay in the closure of wounds in the blank group. Pus was discharged from these wounds, suggesting the development of severe skin infection. These observations were validated through HE staining (Figure 5c). The GSH-AgNCs-treated wounds had more capillaries and fibroblasts. After 10 days, no obvious inflammation occurred, and the healing outcome was better.
Nevertheless, several neutrophils and necrotic nuclei were found under the infection wound in the control groups. Taken together, significantly inhibited inflammation, less necrotic tissue, more regenerated skin appendages, and good wound closure indicate better healing outcomes in GSH-AgNCs-treated burn infection wounds.

Conclusions
This study developed a silver nanocluster capped with glutathione, functioning as an antibacterial drug with promising potential for eschar penetration, fluorescent tracing, and renal excretion in burn infection wound healing. Of note, the small particle size and surface chemistry offer GSH-AgNCs excellent permeability; therefore, they kill the bacteria under the eschar and prevent secondary sepsis upon entry into the systemic circulation. In vitro and in vivo toxicity analyses show that GSH-AgNCs are of low toxicity; they undergo renal excretion and are cleared within 24 hours. Also, they maintain fluorescence excitation of a certain intensity in the body for a given time and thus can be adopted as a fluorescent quantitative method to assess the distribution of nanoclusters in vivo and to regulate the dose to avoid accumulated drug toxicity events. GSH-AgNCs-treated wounds demonstrate accelerated wound healing, evidenced by faster wound closure, infection suppression and inflammation inhibition compared to untreated wounds after 10 days. However, future work should focus on the toxic dose effect of GSH-AgNCs on different organs and establish a model for relating in vivo fluorescence intensity to the amount of GSH-AgNCs. This would provide a reference for transdermal treatment systems.
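As a footnote to the statistical analysis described in the Methods (one-way ANOVA, significance at p < 0.05, data as mean ± SD), a minimal sketch of such a comparison is given below; the paper used GraphPad Prism, scipy is used here instead, and the wound-closure percentages are invented for illustration only.

```python
# Minimal one-way ANOVA sketch mirroring the statistical analysis described
# in Methods. The wound-closure percentages are invented, not the paper's data.
from scipy.stats import f_oneway
import numpy as np

blank   = np.array([55.2, 60.1, 58.7, 52.9])  # PBS-dressed wounds, day 10
treated = np.array([96.5, 98.2, 94.8, 97.1])  # GSH-AgNCs-treated wounds

f_stat, p_value = f_oneway(blank, treated)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print("significant" if p_value < 0.05 else "not significant")

# Report as mean +/- SD, matching the convention stated in Methods.
for name, g in [("blank", blank), ("GSH-AgNCs", treated)]:
    print(f"{name}: {g.mean():.1f} +/- {g.std(ddof=1):.1f} %")
```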
Angular error reduction of a machine-vision system using a trapezoidal trajectory velocity

Nowadays, laser scanners play an important role in technology and research when it comes to measuring, physically and in real time, the 3D coordinates of any object under observation. Laser scanners have to measure the 3D coordinates of these objects in the shortest time and with the smallest measuring error. A novel laser scanner called Technical Vision System (TVS), which uses dynamic triangulation and DC motors, was developed and proven to represent a fast and reliable scanning system. Previous research used the step response of the TVS mechanical actuators (DC motors), which can overshoot if certain controller parameters of the closed-loop control are selected. If the overshoot is undesirable, a trade-off must be made between speed and overshoot of the step response. Thereby, the present paper describes a new approach to control the actual angular position of the TVS DC motors using a trapezoidal profile for the DC motor velocity. The DC motor step response is replaced by a trajectory response, which allows the reduction of the relative angular error of the DC motor shaft to a value less than or equal to 0.1 percent. Also, the design of the control system to implement the trapezoidal trajectory profile for the DC motor velocity in practice is described in detail.

Introduction
Laser scanners as sensors in machine vision systems play an important role when it comes to measuring three-dimensional coordinates of any object under observation. Compared to other types of sensors, laser scanners offer advantages because they do not estimate the 3D data of these objects, but determine their physical position in real time using different measurement methods (Sergiyenko & Rodriguez-Quiñonez, 2016). Thereby, laser scanners capture stereo data of the scanned object without having to compare between different scanning points, as is the case with cameras, for example. This allows laser scanners to be designed as single-sensor systems with low requirements for data post-processing. However, most laser scanners today still use cameras to receive the reflected laser beam and convert it into a matrix of digital signal values. Compared to single-sensor photodetectors, cameras have significant disadvantages, which can be defined in terms of the precision of measuring real physical values and the speed of post-processing these values. The main task of laser scanners can be defined as the determination of the 3D coordinates of observed objects, which is mainly realized using optical methods in a very large number of applications. 3D mapping (Zhang & Singh, 2017), autonomous robots, automatic inspections, coating thickness measurement, and object existence and location detection are clear examples of accurate positioning technologies. Besides, most of these are built as mechatronic systems, which integrate DC motors as the main electromechanical actuators. A novel Technical Vision System (TVS, prototype No. 3) was developed (Lindner, Sergiyenko, Rivas-López, Hernández-Balbuena, et al., 2017), which represents an optoelectronic machine vision system capable of measuring 3D coordinates in the TVS field-of-view (FOV), to determine the information of scanned surfaces for scientific and industrial applications. The TVS has the key advantage over cameras that it is basically designed and constructed as a single-sensor, high-speed, and ultra-low-cost laser scanning system, which uses the dynamic triangulation measurement method and trigonometric functions
to measure 3D coordinates of any object under observation (Lindner, Sergiyenko, Rodríguez-Quiñonez, et al., 2016; Lindner, Sergiyenko, Rivas-López, Ivanov, et al., 2017). Dynamic triangulation, in comparison to classic (static) triangulation, allows a faster acquisition of all 3D coordinates in the FOV of a laser scanner. Since with dynamic triangulation only the laser point is moved over the surface of an object under observation, and not the entire technical system, higher dynamics and thus significantly reduced scanning times can be achieved. A typical application for dynamic triangulation can be found in the navigation of mobile vehicles, where fast acquisition of 3D coordinates plays an important role. Using the novel approach of dynamic triangulation and the TVS, the mobile vehicle's FOV can be defined regardless of its movement trajectory, and thus the vehicle can map and capture a new environment in shorter time and with less energy usage than with static triangulation or multiple-sensor systems. When designing this TVS, two special requirements were defined in order to significantly simplify the prototyping of the system. The first requirement was defined by the use of off-the-shelf mechanical, electromechanical and optical elements, and the second requirement by the use of open-source hardware and free software tools. This allows the general reproduction of our TVS and its realization as an ultra-low-cost laser scanning system.

To implement the dynamic triangulation measurement method in practice, the TVS consists mainly of a laser transmitter and receiver. The laser transmitter of our TVS was realized as the Positioning Laser (PL), shown in Fig. 1, and contains the following main components: a Maxon brushed DC motor with an incremental encoder providing 1000 pulses per revolution, a 45°-cut mirror and a 10 mW laser module. To aim the laser beam precisely at an observed object surface, the actual angular position of the DC motor must be controlled with the smallest possible angular error. This has been achieved by implementing a proportional algorithm (P-algorithm) in closed-loop (CL) configuration, using a microcontroller (Lindner, Sergiyenko, Rivas-López, Rodríguez-Quiñonez, et al., 2016). In this previous research, the P-algorithm controlled the actual angular position of the DC motor shaft using a constant reference angular position, which leads to a step response of the DC motor shaft. A positioning time ≤ 100 ms and a relative angular error ≤ 11.10% were achieved (Lindner, Sergiyenko, Rivas-López, Rodríguez-Quiñonez, et al., 2016). Due to static and dynamic non-linear friction effects (Virgala et al., 2013; Virgala & Kelemen, 2013), the DC motor shaft is thereby always positioned with an angular error.
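As an aside, the geometry underlying the triangulation approach mentioned above can be sketched in a few lines; the code below is a generic textbook law-of-sines calculation, not the exact dynamic-triangulation formulation of the TVS, which is given in the cited papers, and the angle and baseline values are illustrative assumptions.

```python
# Generic 2D triangulation sketch (law of sines): a laser emitter and a
# receiver separated by a baseline "a" both measure the angle to the laser
# spot on the object. This is only the underlying textbook geometry, not
# the TVS dynamic-triangulation formulas from the referenced papers.
import math

def triangulate(B_deg: float, C_deg: float, a: float):
    """B: angle at the emitter, C: angle at the receiver (both measured
    from the baseline), a: baseline length. Returns (x, y) of the spot,
    with the origin at the emitter and the x-axis along the baseline."""
    B = math.radians(B_deg)
    C = math.radians(C_deg)
    # Perpendicular distance of the spot from the baseline (law of sines).
    y = a * math.sin(B) * math.sin(C) / math.sin(B + C)
    x = y / math.tan(B)  # offset along the baseline from the emitter
    return x, y

# Illustrative values: 1 m baseline, 60 deg and 70 deg measured angles.
x, y = triangulate(60.0, 70.0, a=1.0)
print(f"laser spot at x = {x:.3f} m, y = {y:.3f} m")
```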
Further reduction of the positioning time and relative angular error is desired; thus, the present paper describes a new approach to control the actual angular position of the PL DC motor, using a trapezoidal profile as trajectory for the reference angular speed ω_d(t) of the DC motor. This approach allows a reduction of the positioning time to ≤ 44 ms and a reduction of the relative angular error to ≤ 0.1%, compared to the implementation of a P-algorithm using a microcontroller (Lindner, Sergiyenko, Rivas-López, Rodríguez-Quiñonez, et al., 2016). To design the control system, which applies the trapezoidal trajectory profile for the DC motor velocity in practice, the LM629N-8 IC from Texas Instruments is implemented in a motion controller system specially developed by our research team. The first version of the motion controller system consists of a self-developed Arduino shield for the Arduino Mega 2560, which can be seen in Fig. 8. The second version of this motion controller is currently being developed by our research team. The paper is organized as follows. After the introduction, the next section gives an overview of laser scanners which do or do not use cameras, and what essentially distinguishes these systems from our novel approach. The third section describes the theoretical concepts used to reduce the relative angular error of our TVS. Section four defines the methods used to implement the trapezoidal trajectory profile using the digital controller LM629 and how this controller is integrated into the actually developed TVS. The fifth section describes the experimental realization for determining the actual absolute position error of the TVS PL, whose experimental results are summarized in section six. The last section presents conclusions and further research tasks.

Related works
Laser scanners which use cameras to detect the reflected laser beam can be found in numerous and even older articles. Schmalfuß presents one of the first papers comparing laser scanners versus CCD cameras for automated industrial inspection (Schmalfuß, 1990). Podsedkowski et al. (1999) present a method for robot navigation using different sensors for estimating the current position and orientation of the mobile robot system. The robot orientation system is based on a laser scanner of the brand SICK LS200. The combination of both laser scanner and camera is used in (Acosta et al., 2006), which introduces the design and development of a 3D scanner using a line laser and a webcam. From this point on, an increasing amount of research about laser triangulation using cameras is widely found in the scientific literature (Abu-Nabah et al., 2016; Emam et al., 2014; Idrobo-Pizo et al., 2019; Kienle et al., 2020; Kim et al., 2018; Lee et al., 2017; Lehtomäki et al., 2016; Lin et al., 2017; Mill et al., 2013; Xiao et al., 2015). However, research that implements 3D scanners without cameras is much less available. After an intensive literature search, no work on 2D or 3D scanners was found that only uses off-the-shelf optical and electromechanical elements to implement the laser scanner. This means that in all found research a standard industrial 2D or 3D laser from different brands was used. The literature review also revealed that, for moving and positioning these laser scanners, industrial solutions in the form of servo systems (electric motor and controller) were also used.
Kolu et al. (2015) describe an obstacle mapping method for navigating a multipurpose loader, using a tilted 2D laser-servo system consisting of a SICK laser scanner and a tilting servo module. Martínez et al. (2015) describe the construction and calibration of a low-cost 3D laser scanner for mobile robot navigation, which furthermore uses a DC servomotor equipped with a gear unit. Using gear units increases the systematic error of the system, which among other things is due to the backlash present in every gear. Also, this work uses a Hokuyo laser scanner, which represents a high-cost commercial solution. Moon et al. (2015) offer a 3D laser range finder for object recognition, which also uses a high-cost industrial solution from SICK. The presented system consists of a 2D laser range sensor and a servo drive controlled by an embedded computer running Linux. Bauersfeld et al. (2019) present a 3D laser scanner, developed by adding an additional axis of rotation to a planar tilted 2D laser scanner, which is moved using a Dynamixel servomotor. As a last example, Singh et al. (2020) present a modified 3D laser system assembled from a 2D laser scanner coupled with a servomotor. The UTM-30LX laser scanner is coupled with a servomotor, and both are used to generate a 3D point cloud for mobile robot navigation. Also, most research developing 2D or 3D scanners does not use open-source hardware or free software. For example, Viola et al. (2017) design and evaluate a fractional-order PID controller, whereby different control strategies are implemented using the MATLAB Stateflow toolbox and an NI data acquisition system. To prove their proposal of a DC motor speed regulator using active damping, Kim and Ahn (2021) use a Quanser QUBE-Servo2 and the myRIO-1900.
In summary, it can be said that, after intensive literature research, not a single project was found that pursues our novel approach of combining a laser triangulation system with DC motors without using cameras. The unique design principle of our Technical Vision System is described in numerous previous works (Básaca-Preciado et al., 2014; Lindner, Sergiyenko, Rivas-López, Hernández-Balbuena, et al., 2017; Lindner, Sergiyenko, Rivas-López, Ivanov, et al., 2017; Lindner, Sergiyenko, Rivas-López, Rodríguez-Quiñonez, et al., 2016; Lindner, Sergiyenko, Rivas-López, Valdez-Salas, et al., 2016; Lindner, Sergiyenko, Rodríguez-Quiñonez, et al., 2016; Reyes-Garcia et al., 2018; Reyes-García et al., 2019; Rodríguez-Quiñonez, Sergiyenko, Hernandez-Balbuena, et al., 2014; Rodríguez-Quiñonez, Sergiyenko, Preciado, et al., 2014) and is partly presented in this paper. This can be summarized by the following properties. Our system represents a single-sensor, high-speed, ultra-low-cost laser scanning system, which uses the dynamic triangulation measurement method and DC motors controlled in CL configuration, to physically measure 3D coordinates of any object under observation, without using extensive post-processing or cameras. It must be emphasized that in the TVS mechanical design no gears have been implemented, to further reduce the systematic error caused by gear backlash and other non-linear effects of the gear mechanism. All DC motors are controlled directly and without gears in CL configuration, which reduces the absolute angular error and the positioning time of all TVS DC motor shafts. It must also be emphasized that for the overall TVS design only off-the-shelf mechanical, electromechanical and optical elements and open-source hardware and free software tools were considered, which highly reduces the complexity and highly improves the reproducibility of our 3D laser scanning system.

Theoretical concepts
Previous research has shown the reduction of the angular error of the TVS PL DC motor shaft by improving the CL positioning algorithm and using high-quality DC motors. In addition, the positioning time (step response rising time t_r) has been reduced, maintaining the relative angular error at less than 1 percent (Lindner, Sergiyenko, Rivas-López, Hernández-Balbuena, et al., 2017). However, due to static friction in the ball bearings of the DC motor, a non-zero angular error will always be present, which can be estimated using a nonlinear friction model (Lindner, Sergiyenko, Rivas-López, Ivanov, et al., 2017). Note that if a constant value is used for the reference angular position, θ_d = const., then the actual angular position θ_a(t) of the DC motor shaft is stabilized around the final angular position θ_∞ = θ_a(t → ∞), resulting in a greater final angular position error e(t → ∞) = θ_d − θ_∞ due to the second-order system response and the overshoot of the DC motor. On the other hand, if a trapezoidal profile is used for the reference angular speed ω_d(t), which by integration results in the reference angular position θ_d(t) as a ramp function, the actual angular position θ_a(t) of the DC motor shaft is stabilized around this ramp function using a PID controller, resulting in a smaller final angular position error and preventing the overshoot of the DC motor shaft.
Fig. 2 shows the trapezoidal profile for the reference angular speed ω_d(t), with t_3 = t_1 + t_2, which can be expressed using the unit step function σ(t):

ω_d(t) = (ω_max / t_1) [ t σ(t) − (t − t_1) σ(t − t_1) − (t − t_2) σ(t − t_2) + (t − t_3) σ(t − t_3) ].   (1)

Transforming (1) to the frequency domain, ℒ(ω_d(t)) = Ω_d(s), yields:

Ω_d(s) = (ω_max / (t_1 s²)) (1 − e^(−t_1 s) − e^(−t_2 s) + e^(−t_3 s)).   (2)

The well-known model of a DC motor with permanent magnets and back-EMF in the frequency domain is depicted in Fig. 3 and can be found in most literature (Chapman, 1985; Fischer, 2016; Hughes & Drury, 2019). Here, K_e represents the electrical gain, K_T the torque constant, K_m the mechanical gain, K_b the back-EMF gain, τ_e the electrical time constant and τ_m the mechanical time constant, which are usually provided by the data sheet. The corresponding transfer function of the DC motor is represented by:

G_M(s) = Ω_a(s) / U_a(s) = K_e K_T K_m / (1 + K_e K_T K_m K_b + s (τ_e + τ_m) + s² τ_e τ_m).   (3)

Fig. 4 depicts the DC motor speed control in closed-loop configuration, using a PI controller. The gain of the PI controller is K_R and the reset time is T_n. The integral part of the controller is necessary since, when the angular speed error e_Ω = 0, a constant armature voltage must still be generated, hence the DC motor continues to rotate with a constant actual angular velocity Ω_a = Ω_d. The corresponding transfer function of the DC motor speed control in closed-loop configuration can be represented as a third-order rational function:

G_CL(s) = Ω_a(s) / Ω_d(s) = G_PI(s) G_M(s) / (1 + G_PI(s) G_M(s)),  with G_PI(s) = K_R (1 + 1/(T_n s)).   (4)

Using the final value theorem of the Laplace transform, the final value of the actual angular position θ_∞, when applying a trapezoidal input, shall be calculated:

θ_∞ = lim_(t→∞) θ_a(t) = lim_(s→0) s Θ_a(s) = lim_(s→0) s (1/s) G_CL(s) Ω_d(s) = lim_(s→0) G_CL(s) Ω_d(s).   (5)

Using (2), (4) and evaluating this expression yields an indeterminate form. Applying L'Hôpital's rule twice and using t_3 = t_1 + t_2 yields the final value:

θ_∞ = ω_max t_2.   (6)

Thus, a possibility was found to define the final value of the DC motor shaft actual angular position using only the two parameters ω_max and t_2. This defines a mathematical relationship between the shape of the trapezoidal velocity profile ω_d(t) (Fig. 2) and the DC motor shaft final angular position θ_∞.

Methodology
The block diagram of the PL DC motor shaft position control contains a GUI (2) between the user and the digital controller LM629, implemented using an Arduino Mega, which receives and sends single data strings to the LM629 Host-Interface (3), translating the user commands for the digital controller. The string data is divided per each command, coefficient and value necessary to load the reference trajectory using the Trajectory Profile Generator (4). The digital controller also contains a summing junction (5), a position encoder (10) and a digital PID loop compensation filter (6), which calculates a new set point as reference value for the DC motor. The control signal from the PID filter is sent to an 8-bit PWM block (7), which generates a PWM signal, which is then amplified using the motor driver H-bridge module L298N (8). Finally, the output signal of the incremental encoder (9) is detected by the position feedback interface of the LM629 (10). Fig. 7 shows the overall connection diagram of the PL DC motor shaft position control, using the digital controller LM629 in closed-loop configuration.
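To make the closed-form result θ_∞ = ω_max t_2 of equation (6) above tangible, the sketch below numerically integrates a trapezoidal velocity profile and compares the result with the closed-form value; the motor is idealized (actual speed equals the reference speed) and the parameter values are illustrative, not the experimental ones.

```python
# Numerical check of the closed-form final position theta_inf = w_max * t2
# for the trapezoidal velocity profile of Fig. 2 (t3 = t1 + t2). The motor
# is idealized here (actual speed == reference speed); values are invented.
import numpy as np

w_max, t1, t2 = 27.0, 0.010, 0.034   # rev/s, s, s (illustrative only)
t3 = t1 + t2

def w_d(t):
    """Trapezoid: ramp up on [0, t1], flat on [t1, t2], ramp down on [t2, t3]."""
    return w_max * np.clip(np.minimum(t / t1, (t3 - t) / t1), 0.0, 1.0)

t, dt = np.linspace(0.0, t3, 200_001, retstep=True)
theta_numeric = float(np.sum(w_d(t)) * dt)   # integrate speed -> position
theta_closed = w_max * t2                    # equation (6)

print(f"numeric final position : {theta_numeric:.6f} rev")
print(f"closed-form w_max * t2 : {theta_closed:.6f} rev")   # both ~0.918 rev
```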
Methodology

Fig. 5 depicts the block diagram of the LM629 implementation as digital controller in the TVS PL, which consists of 10 principal components. Starting with the first component, a Graphical User Interface (GUI) (1) was developed to monitor the system, and a host (2) mediates between the user and the digital controller LM629; the host is implemented using an Arduino Mega, which receives and sends single data strings to the LM629 Host-Interface (3), translating the user commands for the digital controller. The string data is divided per command, coefficient and value, as necessary to load the reference trajectory using the Trajectory Profile Generator (4). The digital controller also contains a summing junction (5) and a digital PID loop compensation filter (6), which calculates a new set point as reference value for the DC motor. The control signal from the PID filter is sent to an 8-bit PWM block (7), which generates a PWM signal that is then amplified using the motor driver H-bridge module L298N (8). Finally, the output signal of the incremental encoder (9) is detected by the position feedback interface of the LM629 (10). Fig. 7 shows the overall connection diagram of the PL DC motor shaft position control, using the digital controller LM629 in closed-loop configuration.

Experimental realization

Following the methodology of the previous section, the LM629 connection diagram (Fig. 7) was developed, which represents the design of the new approach for controlling the TVS PL. The developed hardware connections and hardware algorithms were oriented towards intercommunication, reading and writing operations and logical control signals, using an Arduino Mega as host, the LM629 precision motion controller, an L298N dual full-bridge driver and the Maxon brushed DCX motor, which represents the actuator for the TVS PL. The developed experimental hardware connections are depicted in Fig. 8. After the experimental hardware implementation, a GUI was developed based on the LabVIEW platform to interact with the LM629 precision motion controller. Fig. 6 depicts the front panel of the GUI, which shows the tracking of the positioning laser, using the data formatted as (x: time, y: position) and displayed in an XY graph, to determine the DC motor shaft actual angular position $\theta(t)$.

All experimental parameters used are summarized in Table 1, which represents the minimum data required for positioning the DC motor shaft. The DC motor shaft actual angular position is represented by $\theta(t)$. The relative angular error $e'$ and the average relative angular error $\bar{e}'$ are defined by:

$$e' = \frac{|\theta_r - \theta(t \to \infty)|}{\theta_r} \cdot 100\ \%, \qquad \bar{e}' = \frac{1}{n} \sum_{i=1}^{n} e'_i.$$

The actuator of the TVS PL includes an incremental encoder with an angular resolution of 1024 pulses per revolution. Using the LM629 motor position decoder module, the digital controller quadruples this angular resolution, as defined by the LM629 data sheet, resulting in a maximum possible angular resolution of $\rho = 360°/4096 \approx 0.08789°$. Since the encoder outputs a digital signal and its function does not depend on calibration, the resolution of the encoder can be directly related to the measurement uncertainty of this digital signal. The counts per reference angular position are defined by $N_r = \theta_r/\rho$, and the counts per actual angular position $N_m(t)$ represent the discretized actual angular position $\theta(t)$. Here, counts means the counted pulses of the DC motor encoder. Note that the digital controller rounds the actual angular position up to the next higher discretized value in $N_m$ counts. The acceleration $a$ and the maximum velocity $v_{\max}$ represent trajectory parameters. When executing the GUI for the first time, the experimental hardware processes the software flow defined by the LM629 programming guide, in which two highlighted stages are executed. The first stage evaluates the optimal response of the DC motor shaft, searching for optimal values of the PID control algorithm coefficients $K_P$, $K_I$ and $K_D$, which were tuned using an empirical method and are summarized in Table 2. The coefficients of the PID control algorithm were also confirmed using simulation.

The second stage consists in setting up the trapezoidal velocity trajectory $\omega_r(t)$ by entering the values of $a$, $v_{\max}$ and $N_r$, which are downloaded to the LM629 from the GUI. After evaluating the hardware and software configuration, 25 tests were made using as reference angular position $\theta_r$ = 1°, 5°, 15°, 90° and 360°.
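The discretization and error quantities just defined can be sketched in a few lines. The helper names below are ours, and the example simply illustrates the one-count granularity of the LM629 position decoder with the encoder data stated above.

```python
# Stated encoder data: 1024 pulses/rev, quadrupled by the LM629 decoder.
RES_DEG = 360.0 / (1024 * 4)  # rho ~= 0.08789 deg per count

def counts(theta_deg: float) -> float:
    """N = theta / rho: encoder counts corresponding to an angle."""
    return theta_deg / RES_DEG

def rel_error_pct(theta_ref: float, theta_meas: float) -> float:
    """Relative angular error e' in per cent, as defined above."""
    return abs(theta_ref - theta_meas) / theta_ref * 100.0

# Example: a measurement exactly one encoder count away from a 90 deg
# reference is already at the ~0.1 % error level; any positioning error
# below one count is invisible to the LM629.
theta_ref = 90.0
theta_meas = theta_ref + RES_DEG
print(f"rho = {RES_DEG:.5f} deg, N_r(90 deg) = {counts(theta_ref):.0f} counts")
print(f"e' = {rel_error_pct(theta_ref, theta_meas):.3f} %")
```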
Experimental results

During the experiment, various observations were made. The DC motor shaft was positioned using an acceleration of $a$ = 26 rev/s², a maximum velocity of $v_{\max}$ = 27 rev/s and a sample interval of $t_s$ = 256 µs. Table 3 summarizes all 25 tests submitted in the experimental realization, evaluating the behavior of the DC motor shaft actual angular position and using as reference angular positions $\theta_r$ = 1°, 5°, 15°, 90° and 360°. Each reference angular position was used in 5 tests, to calculate the relative angular error $e'$. The results in Table 3 show clearly that, by using the digital controller LM629, the DC motor shaft actual angular position is controlled with an absolute position error smaller than the resolution $\rho$. That is the reason why in Table 3 the relative angular error $e'$ is almost always zero: since the positioning error is smaller than the resolution $\rho$ of the measuring system, the absolute position error cannot be measured with the LM629. In addition to the advantage of higher positioning accuracy, the step response of the TVS PL DC motor shaft was also accelerated, resulting in positioning times shorter than 44 ms.

Conclusions

The present paper explores a new methodology to further reduce the angular error of a laser scanning machine vision system using a trapezoidal trajectory profile for the TVS PL mechanical actuators (DC motors). Thereby, the step response of the DC motors is replaced by a trajectory response, using a trapezoidal profile for the DC motor velocity, to reduce the angular error of the DC motor shaft. The trapezoidal profile for the DC motor velocity is implemented using the digital controller LM629. This approach significantly reduces the relative angular error, to a value less than or equal to 0.1 %, compared to the previously used step response (Lindner, Sergiyenko, Rivas-López, Rodríguez-Quiñonez, et al., 2016).

The results in Table 3 show the advantages of using a trapezoidal velocity trajectory $\omega_r(t)$ over a constant value for the reference angular position $\theta_r(t)$, resulting in higher precision and higher accuracy in each of the 25 realized tests.

Table 4 shows the main results of the comparison of the step response (Lindner, Sergiyenko, Rivas-López, Rodríguez-Quiñonez, et al., 2016) with the newly developed trajectory response, using a trapezoidal profile for the DC motor velocity. As shown in this table, the relative angular error was reduced from a maximum of 11.1 % to a maximum of 0.1 %, and the positioning time from a maximum of 100 ms to a maximum of 44 ms. That means an improvement of the relative angular error by ≈ 99.1 % and of the positioning time by ≈ 56 %. However, it is important to determine the reliability of this new approach by comparing the experimental results of Table 3 with other experimental results. Further research tasks are therefore defined by the verification of the experimental results (Table 3) using physical measurements, as well as by the comparison of the digital controller LM629 against industrial positioning controllers, such as the Maxon EPOS 24/1. Also, for the moment we are still working on optimizing the receiving part of the TVS in order to detect the reflected laser beam faster, with higher accuracy and higher precision. Hence, extensive experiments using different objects, surfaces, colors, etc. will also be the subject of future tasks.
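The improvement figures quoted in the conclusions follow directly from the Table 4 maxima; the short computation below reproduces them (note that with the quoted times, the positioning-time improvement evaluates to 56 per cent).

```python
# Improvement figures from the Table 4 comparison
# (step response vs. trapezoidal trajectory response).
step_err_pct, traj_err_pct = 11.1, 0.1     # max relative angular error (%)
step_ms, traj_ms = 100.0, 44.0             # max positioning time (ms)

err_gain = (step_err_pct - traj_err_pct) / step_err_pct * 100
time_gain = (step_ms - traj_ms) / step_ms * 100
print(f"relative angular error improved by {err_gain:.1f} %")  # ~99.1 %
print(f"positioning time improved by {time_gain:.0f} %")       # ~56 %
```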
Figure 2. Trapezoidal profile for the reference angular speed.
Figure 4. DC motor speed control in closed-loop configuration.
Figure 5. Block diagram of the LM629 implementation as digital controller in the TVS PL.
Figure 6. Front panel of the newly developed GUI using the LabVIEW platform.
Figure 8. TVS PL hardware connections. The desired angular displacements of the laser beam are represented in values of the reference angular position $\theta_r(t)$.
Table 2. Experimental coefficients for feedback control.
Memories, Lives, Lost: Psycho-Sociologies from the Past and Human Behavior of Albanians in the Epoch of Capital

The paper presents parts of "forgotten" social and psychological experiences from two periods, the socialist and the post-socialist one, in order to point out similarities, dilemmas about missed values, and the needs of particular groups of the population in Albania. It is a qualitative study, based on semi-structured interviews and specific cases that include real and distinctive experiences, official documents, cinematography, etc. The period after 1990, built on an entirely different civic background, has brought other concepts of democracy, solidarity and public participation, and certainly other concepts of the boundaries between oneself and others. The dilemma between the human behaviors of two generations belonging to different worlds is the strongest barrier for Albanian cultural values. The results will tell us whether there is attention to and appreciation of the past among Albanians, and how they understand it now, in an epoch of vigorous localization and globalization and of the desire to enter Europe.

Introduction

This study, based on qualitative analysis, is an attempt to show the physiognomy of present-day perceptions of post-communist Albanian society. Bringing out the values or strong points of the past regime may affect current social policies as a credible basis of the welfare state, or may influence the revaluation of the concept and practice of social capital, both important for achieving European integration. The paper attempts to describe the situation around the chosen question. Besides literature, observation and semi-structured interviews, special attention is paid to the story-telling of people of different generations and professions, in order to understand their lifestyle, their perception of culture, place and space, and their perception of themselves and of others under the socialist regime and after it. The issues are seen in the light of social work, a profession that raises the awareness of the community about its values and the processes of its growth, so as to better orient opportunities according to the development policies in the area through lobbying.

The post-1989 era was labeled "the end of history" (Fukuyama, 1992), while Lucio Magri, a leading member of the Italian Rifondazione Comunista, suggested: "When the Berlin Wall came down the judgment of many people was one of euphoria. They saw the coming of a new historical period marked by world cooperation and democratic advance which would provide a clear opportunity for democratic socialism with a human face. Now we can see that the reality is different and much harsher" (1991, p. 5).

However, societies are at the same time being plunged into the depths of their own histories. People should be guided in their evolution towards becoming citizens. Memories of socialism elicit anger and shame, provoke laughter and derision; they activate feelings of rupture, trauma and loss, and conjure up images of injustice and victimhood. At the same time, more positively, they give those who remember a sense of victory, triumph and closure. As a kind of social knowledge, those memories are seen not only as strategies of "enrichment," but also as a resource for constructing cognitive frameworks in which to anchor oneself existentially at a profoundly disorienting moment of transformation. In other words, we need to document and understand better not only what but also how people reminisce.
To quote David Sutton (1998: 3), "the past comes in many different containers bearing different labels." Those multiple "containers" are often as significant as the mnemonic messages they carry. One of the key arguments holds that "nostalgic" icons are successful because they play the cultural role of mnemonic bridges to, rather than tokens of longing for, the failed communist past. This phenomenon has recently become an object of scholarly research, leading Maria Todorova to write, paraphrasing Marx, that "a specter is haunting the world of academia: the study of post-Communist nostalgia" (Todorova and Gille, 2010: 1). Todorova argues that mainstream discourse treats nostalgia in the post-communist world as a "malady", an anachronism, a dysfunctional attitude towards a "seductive" yet "deadly" ideology, in the words of the moral philosopher Tzvetan Todorov. Because of the broken historical continuity, it is hard to put the pieces back together, to recompose the puzzle, since words have ceased to speak in an intelligible way.

I agree that memory as a practice and a generator of social knowledge (to remember is to know) affords a productive site in which to investigate and better apprehend the ongoing post-socialist transformation with its many unintended and bewildering consequences. So we need to understand that memory's principal concern is the present. The anthropologist Rubie Watson (1994: 6) has written that "constructing the new is deeply embedded in reconstructing the old." Specifically with regard to post-socialist Eastern Europe, it seems that the ongoing accumulation and possession of memories can be construed as a strategy for generating a kind of symbolic capital.

Situation in Albania

The last decade has been one of great political, economic and social transformation in Albania, representing one of the most notable and turbulent periods in the history of the country. Most of the changes were difficult to predict for Albanians, whose expectations of a better life at the outset of the transition from communism were enormous, characterized by a desire to become a country like the rest of Europe. Their expectations for the future were great, but the path to turning them into reality was lined with much naive thinking. Democracy was identified with political pluralism, justice was thought of simply in terms of a reform of the legal system, a market economy was envisaged in terms of extreme liberalism and the privatization of almost everything, and economic development was viewed as unlimited external assistance and foreign investment. Of course, such a situation of great expectations existed in almost all the post-communist countries of Europe. What distinguished Albania from the other transition countries was that its point of departure was vastly different: it was the most isolated and poorest country on the continent, with one of the highest percentages of rural population.

The opening of Albanian society has made the lifestyle and preferences of urban society (especially in Tirana) more uniform and European but, at the same time, the other part of society, rural society in particular, has returned to its traditional way of life, characterized by patriarchal families, the Kanun, blood-feuding, etc.
Demographic developments, together with major internal (rural to urban) migration, rural depopulation and social fragmentation, the effects of unchecked urbanization and an increasing lack of social cohesion, all with long-term repercussions for the emancipation and civilization of society, have had troublesome effects both in the cities and in the countryside. Regional allegiances, held back by ideology during the communist period, have begun to resurface, exerting their influence both over political and government organization and over the country's economic and social development. The desire and willingness of people to respect the law is the one sphere in which Albanian society in transition has made the least progress. Efforts to interest Albanian civil society in respecting the law, by encouraging the willingness and social conscience of people to change their attitudes, have proven largely unsuccessful.

The political arena in Albania has evolved substantially over the last decade. During the early stages of the transition, politicians were divided into self-declared "anti-communists" and those accused of being the "heirs of communism." This extreme ideological polarization, which rather superficially equated democracy with anti-communism, also had an impact on relations between the political parties and created a climate of extreme conflict in the country, which, in turn, gave rise to tension and social destabilization. Over the last few years, politics has developed new characteristics. The high level of political militancy characteristic of the earliest years of transition has been replaced by a narrower militancy, one which is more focused on self-serving interests. For example, if in the early stages of the transition personnel changes in the administration were carried out with the idea, or pretext, of replacing the old class and bureaucracy, it is now access to government administration that motivates membership in political parties. The extreme polarization of the public has gradually been replaced by a certain reservation and even an indifference towards politics. In the search for clear and definite solutions, one easily finds a particular ambivalence and divergence between honest work and corruption, between the institutionalization of the market-based and the informal sectors of the economy, between the well-being of families and illegal trafficking in drugs and human beings, between the complete opening of a still fragile and uncompetitive economy and domestic production, and between environmental protection and unchecked urbanization. Poverty as a multi-dimensional phenomenon has grown markedly in Albania and contrasts starkly with the obvious economic achievements of the last decade of transition. Whereas in the past the widespread state of poverty was not officially recognized, Albanians now suffer from a measurable increase in income poverty and from a rapid polarization of society reflecting increasing economic and social disparities. The indicators of unemployment have features which reflect not only the rhythms of change, but also the traditions, characteristics and opportunities for development of the various regions. Some old traditions, supported by the stipulations of the Kanun, have begun to make their presence felt in the countryside, in particular in the northeast of the country. At the same time, the government and society remain powerless or indifferent to these issues.
During the transition period, Albanian women were confronted with the revival of old phenomena of discrimination and with new phenomena which did not previously exist in Albania. When most of the state enterprises closed down in the initial stages of transition, women were among the first to lose their jobs, and finding a new job became quite a difficult challenge for them. Until a few years ago, the trafficking of girls and women for prostitution had also become a serious problem in Albanian society. Compared to several other Balkan countries, Albania has shown the lowest level of human development (Human Development Report, Albania 2002). This result is complex to analyze; it has to do with a combination of historical, political, economic and social factors. The Albanian transition has been negatively affected by the long drawn-out crisis in the region, in particular the war in former Yugoslavia and especially the Kosovo crisis. Society has been changing rapidly during the transition period. Earlier norms, values and convictions have been shaken deeply, and society is still having difficulty finding itself with regard to ethics and morals. It is being faced with many dilemmas. People are in search of new symbols and institutions with which they can identify. Thus, the problem of relations between the individual and society has come increasingly to the fore. Under the dictatorship, individuals had their own way of thinking and speaking, and a system of common values set forth by official ideology. Under these conditions, collective freedom had priority over individual freedom. As such, communism was not able to educate people to play an active role as citizens. The transition from collectivism to individual freedom for the population has been an urgent necessity, but it has also proven rather difficult. According to this new logic, there are a few individuals (foreigners and Albanians) who own the truth and who hold the key to the future. The media on their own have reinforced this type of thinking. It is a process "without a subject" in which the ruins of the old system and the dynamics of the new economic and political mechanisms coexist. This serves in part to explain the weaknesses of democratic institutions and the difficulties involved in activating them. This mentality reflects the universe of post-communist Albanian politics and has had a negative influence on the creation of responsible citizens working actively for the democratic welfare of the country. It also reflects the fact that political discourse tends to gravitate towards charisma, demagogy and personal loyalties. The old way of thinking inherited from the dictatorship continues to survive, i.e.,
a polarization of politics between the "we" (the convinced supporters of the true political course) and the "they" (enemies of the people and of the Party). It continues despite the fact that the now less acute distinction is difficult to make. Cultural institutions, after years of isolation, had difficulty changing their way of thinking and adapting to the new logic of a market economy (by now, of course, the situation and people's behavior have changed). What unfortunately characterized cultural institutions for quite some time was confusion, which caused serious damage to the country's cultural heritage and activities. The mass migration of the village and mountain population to the towns and the coastal plain, together with the end of the extreme isolation and social immobility which lasted half a century and was characteristic of the communist period, caused a change in the whole system of values for many Albanians and set in motion a complex process of cultural and economic change.

Socialist register

Let us start with a methodological point. The first approach, following the classical logic of the comparative method, is to seek uniformities and similarities in the sea of diversities and differences and then to account for the reasons why such uniformities emerge. The second, opposite in intention, unravels specificity and uniqueness in the sea of seeming homogeneity and then explains why such diversity emerges and persists (Sztompka, 1990). According to Cyril E. Black of Princeton University, in his preface to Stavro Skëndi's book Balkan Studies (1987), "Albania encompasses within its mountainous frontiers the characteristic Balkan problems, but in an extreme form." In Albanian studies one can find almost nothing purely sociological according to these standards and theoretical practices. Most of the effort in studies of transformation has so far been concentrated on gathering data, particularly by means of survey research, opinion polls, etc. In effect, the diagnosis of the process is very rich, but theoretical reflection has been much more limited. Paradoxically, where it exists at all, it came mostly from outsiders. Under socialism, sociology should have come close to history and economics, as Marx's theory predicted. Strangely, this was not the case. In those countries where it was officially recognized, sociology was placed somewhere between the brand of philosophy known as Marxism-Leninism or scientific communism, some forms of political reflection, and fact-oriented demographic statistics. Now the whole perspective on social change has been revised: the belief in necessity was replaced by the image of "social becoming". Political science, economics and cultural analysis joined hands with sociology, and a number of studies crossed the traditional disciplinary borders in many countries of Eastern Europe, but not so much in Albania. Another issue we faced was the impact of globalization on Albanian life. In the social sciences, ideas, concepts, models and theories have characteristically flowed mainly between Europe and America.
Findings

Respecting the dynamic use of the term, the components of memory in a community, although not always carefully assessed, are: the environment (how it has been transformed by man), the home, the social relations of everyday life, traditions and customs, rituals, spoken language, music and songs, objects used, typical products, the body and the vision of the world. Speaking of Albania as a post-communist country marginalized by the policies of socialist ideology, the objective is not the protection of past memories, which are not to be repeated at all, but the activation of the capacity to detect them as the essence of the changes. This can be valid not only in the economic and social perspective, but also in the symbolic one, because even though the process of modernization cannot be stopped, it is necessary to stop the processes of degradation of economic, social and cultural development.

A great impetus for writing this paper came when I read about the work of the Albanian artist Anri Sala (1998), in which the filmmaker finds mute footage of his young mother giving an official speech during a communist youth meeting in Albania. He hires an expert to read her lips in the video in order to understand what she was saying, but her once meaningful and enthusiastic declarations now appear only as empty ideological slogans. Thirty years later, the filmmaker's mother is full of disbelief when she is told that she really articulated those words. By re-situating these words in past times, and by tracking contemporary forms of nostalgia, remembrance, amnesia and disbelief now, we can lay bare the changes in ways of speaking and of signifying, and thus the political and historical changes in gender regimes and in women's lives.

We found that memory in post-socialism does not seem to follow strict sequential chronologies. While some pasts are retrieved and imbued with memorial significance, others are largely devalued and forgotten. Some reminiscences are more privileged than others; still others have little or no memorial status at all. As well as with young people and professionals, I talked to several people in their sixties and seventies who claimed they had no desire to travel back to "that place of crime," as one of them referred to the dictatorship state. Memory is constantly on our minds not because there is so little of it left, but precisely because there is so much of it. They told me about the main newspaper "Zëri i Popullit" ("Voice of the People"), in which it was often declared that all the generations in Albania should live and grow in a socialist country, the dictatorship of the proletariat. "Today we have the same newspaper, but we don't find things like that."

Fiona, a social worker, 38 years old, told me that nostalgia for the specific forms of the welfare state is one of the feelings most often mentioned by her clients, especially among the third generation in some urban areas in Albania. She was not wrong at all, because one elderly woman, for instance, confided in me that her personal memory of "the good life" in socialism was the only thing that sustained her in day-to-day existence as an impoverished pensioner after socialism: "I was a teacher and everybody respected me. Now I live by those memories only, because the money I get is not enough even for medicines. I feel like nobody. All services used to be free of charge."
Burbuqe, near 50, with middle professional education at that time and currently out of a job, declared: "Who said we didn't live well? Yes, not all the people, but I am nostalgic for that time. Not only at school, the friendship was pure and sincere. We were safe crossing the streets even at midnight. We were all like brothers and sisters, working together and helping each other."

Alketa, near 45, an electronic engineer and university lecturer, remembered the time of "zbori", the military training at school and university. "Wearing those clothes, I always needed to sleep," she said. "We were not aware of any enemy, but we loved this period, because we were closer to each other, boys and girls; it was a kind of informality that, in my opinion, we really wanted to have. I also remember the 'action days' or 'walking days', free of classes, where we all had almost the same food: bread, butter (if we were lucky to have it), eggs and cheese... We had fruits only in season, like all other foods, but it is true that all these things were natural. I am crazy about food today. Look how much you are speaking about obesity..."

The socialist period is described as one in which women had good living standards and could interact on an equal level with men. The feeling of nostalgia is also connected to the loss of a different gender regime, as Diana, near 67, a doctor, said: "The socialist law was very friendly to women. Officially, in everything there was equality with men. Today I can read in the newspaper: 'We don't want women for this job,' or 'women only; we want women who are thirty and nice.' It was completely forbidden at the time; it would have been a scandal if someone had used this language (...). We were satisfied with the law. But we were not satisfied with patriarchy (...)."

Olimbia, 63 years old, an ex-director of a bank, said: "That time? Not anymore! I remember washing polyester clothes with olive-oil soap from Rogozhina or Vlora, a week without a day of rest, because Sunday was reserved for the 'action' of cleaning the territory behind the house, or for voluntary work in agriculture. There was no car to do the job, and I walked for kilometers to inspect the finances of the cooperatives in all the villages of the region. Saturday afternoon was the cleaning day of the house, so can you imagine what my life was like at that time? (...)"

Fuat, 72 years old, said: "Yesterday we had a bad life, but we thought more positively of each other. Today we live better, but we think worse of each other."

Dimitri, 52 years old, a theater director, said: "... Voluntary work was the basis of the active life of children. I remember the traffic controllers among the pioneers, the guards of honor, the commemorative celebrations. The best pupils helped the weak ones to learn better." He also said that we need to respect the memories of the past, the monuments and works of art which are part of Albanian history, such as "The Pyramid" in the center of Tirana and specific objects like the bunkers, as the real story of our parents and of ourselves, and as basic knowledge for the new generation. "Tradition is always connected with national identity, so it is strongly important and crucial for a country and its people."
Karafil, near 57 years old, an ex-military man, remembered the dollar shop in the city of Vlora, where people who had relatives abroad exchanged money or bought "Western things" (watches, TVs, tape recorders, radios, refrigerators, sunglasses, etc.). These people were different from the others in clothing and lifestyle. "Now we are free to be owners, some with big problems with documents, but there is the possibility for all of us to be somebody in business too."

The terminology of today's Albanian reality brings changes in the perception of classes. Nobody speaks of the "working class" anymore; nobody uses terms such as "bej, bejlere", "aga, agallare", "shok, shoqe". These terms have been replaced by the terms businessman, boss, sir, madam. Albanian cinematography is no longer producing films for children. Now, symbolic education, even though it comes from the past, is realized through films such as "Beni walks alone", "Rebellion in the palace", "A general caught slave", "Our friend Tili", etc. These are productions dominated by strong doses of artificial ideology, as well as by overlaps of parental feelings in which the first conceived parent was the Party of Labor of Albania. Highlighted in the interviews was a case which later turned into humor because of its emptiness, placed in the mouth of a child in the film "In our house", produced in 1987. The mother says to her son, who made the mistake of leaving school: "Miri, why do you do these things? Whom do you take after? What are you?" The child replies: "I am yours. I belong to the state." And in another sequence, when the teacher wants to criticize him in the classroom (a typical collective punishment), one girl from the class says: "... these behaviors serve only the bourgeois, the revisionists...", while the boy, very irritated, interrupts: "I am not a bourgeois, I am not a revisionist. I am Albanian," and leaves the classroom. It is worth noticing the messages about unity, regardless of age, which value and encourage the involvement of children in all sectors of society; their education is the responsibility of collective control. Alma, a teacher in an elementary school, commented on the film "Taulant requires a sister" as marking the beginning of a perception of birth control in the family, contrary to the ideological trends of the time towards population-growth policies. Now, at the base of the films stand migration histories, criminality, trafficking, family traumas and the alienation of human behavior.
Immunocompetent Patient With Primary Bone Marrow Hodgkin Lymphoma

Hodgkin lymphoma (HL) is a hematologic malignancy that comprises about 10% of all lymphomas, with the most common type being classical HL (cHL). The typical clinical presentation of cHL involves multiple-region lymphadenopathy and a chest mass found on imaging. However, not all patients present with the typical symptomatology of cHL, which poses a diagnostic challenge. Extranodal HL, especially primary bone marrow HL (PBMHL), has been described in immunocompromised patients with human immunodeficiency virus (HIV). In this case report, we present a PBMHL case in an immunocompetent patient with no HIV exposure. We discuss a 51-year-old immunocompetent female who presented with 2 - 3 months of fever, confusion, generalized myalgias, and fatigue. She had no lymphadenopathy on physical exam. On further testing, the patient's blood work demonstrated cytopenia, and imaging confirmed no lymphadenopathy. Eventually, a bone marrow evaluation established her diagnosis of PBMHL. The patient expired after receiving one cycle of a modified chemotherapy regimen. This case illustrates that HL can be associated with an atypical clinical presentation, which may delay diagnosis and treatment. PBMHL can occur in the normal population that is neither immunocompromised nor HIV positive. In this situation, the best diagnostic approach is a thorough medical history, physical exam, and bone marrow aspiration and biopsy. The presence of constitutional symptoms without any lymphadenopathy or chest mass should raise concern for possible atypical HL such as PBMHL. Accurate and timely identification of PBMHL allows for timely initiation of appropriate therapy. While cHL is responsive to chemotherapy, further research is required to improve the therapy for PBMHL.

Introduction

Hodgkin lymphoma (HL) is a group of lymphoid malignancies characterized by the presence of Reed-Sternberg (RS) cells intermixed with non-neoplastic inflammatory cells. Thomas Hodgkin first described HL in an autopsy report in the 19th century. These lymphomas are divided into classical HL (cHL) and nodular lymphocyte-predominant HL based on morphological and immunophenotypic characteristics. cHL comprises about 90% of HL cases and is further subdivided into four subtypes based on pathologic features: nodular sclerosis cHL, mixed cellularity cHL, lymphocyte-rich cHL, and lymphocyte-depleted cHL. RS cells are characterized by a rounded bilobed nucleus with a prominent eosinophilic nucleolus surrounded by a perinuclear halo and a weakly basophilic cytoplasm. The RS cell surface strongly expresses CD30 and CD15 antigens but weakly expresses PAX5/BSAP antigens. RS cells also express PD-L1 and PD-L2, the programmed death ligands. The typical clinical presentation of cHL involves asymptomatic lymphadenopathy or a mediastinal mass on a chest radiograph [1]. The lymph nodes of the neck, comprising the cervical and/or supraclavicular nodes, are the most frequently involved region of the disease [2]. Constitutional symptoms including fever, weight loss, and night sweats are commonly observed. Extranodal spread of HL, especially to the lungs, liver, bone, or bone marrow, can lead to a worse prognosis. However, some patients, especially individuals with a history of human immunodeficiency virus (HIV) infection and immunosuppressed individuals, initially present with atypical symptoms.
Under such circumstances, patients can present with alcohol-associated pain, liver function test abnormalities, skin lesions, bone marrow infiltration causing unexplained cytopenia or bone pain, or paraneoplastic syndromes affecting the nervous system or leading to nephrotic syndrome. Usually, bone marrow involvement occurs in the advanced stages of the disease [3]. Primary bone marrow HL (PBMHL) is uncommon and most often described in HIV-positive patients. Isolated bone marrow HL presents an aggressive clinical course and, unlike classical HL, displays a poor response to conventional HL treatments [3]. The current standard regimen for HL management in the United States is ABVD (doxorubicin (adriamycin), bleomycin, vinblastine, and dacarbazine) combination chemotherapy. ABVD has been proven to be highly effective as initial therapy, with fewer long-term toxicities when compared to alternative therapies [4]. An alternative chemotherapy regimen called escalated BEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisolone) has also been used for initial HL treatment or for ABVD-refractory HL [5]. Here we present a unique PBMHL case, in which the patient presented with unexplained cytopenia and constitutional symptoms, without any sign of lymphadenopathy or HIV positivity. The patient was diagnosed with HL via a bone marrow aspiration and biopsy. All research conducted in this study was in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Investigations

A 51-year-old HIV-negative female presented to the hospital with a 2-day history of fever and confusion and a 3-month history of generalized myalgias and fatigue. She was originally from Somalia, with a past medical history of polymyalgia rheumatica on intermittent prednisone treatment and chronic hyponatremia. Her blood work showed hemoglobin of 8.9 g/dL, a white blood cell (WBC) count of 5.2 × 10^3/µL, and a platelet count of 60 × 10^3/µL. Her prothrombin time (PT)/international normalized ratio (INR) were mildly elevated at 20 s/1.7, and her serum sodium level was 117 mmol/L. Her serum total bilirubin was 1.4 mg/dL, aspartate aminotransferase (AST) was 223 U/L, alanine aminotransferase (ALT) was 103 U/L, and alkaline phosphatase (ALP) was 198 U/L. A peripheral blood smear did not show any schistocytes, spherocytes, or poikilocytes. The neutrophils showed toxic changes with pseudo-Pelger-Huet nuclei and vacuolization of the cytoplasm. The reticulocyte count was 6.6%, above the normal range. Her hepatitis B and C panels were negative for either acute or chronic infection. Her abdomen/pelvis computed tomography (CT) scan showed right-sided colitis and multiple hepatic lesions representing either metastatic liver disease or microabscesses (Fig. 1). Magnetic resonance imaging (MRI) was performed but was unable to distinguish between these possibilities; however, it revealed heterogeneously enhancing bone marrow within the lumbar spine and pelvis. Her head CT was concerning for lucent lesions in the skull, and her brain MRI was suggestive of central pontine myelinolysis. Her disseminated intravascular coagulation (DIC) panel, ADAMTS13 activity, and autoimmune workup were all negative.
Due to her altered mental status, a lumbar puncture was performed, which yielded normal results except for a cerebrospinal fluid (CSF) glucose of 96 mg/dL. Her iron panel was suggestive of anemia of chronic disease, with a significant elevation of ferritin (13,108 ng/mL). Her serum folate and vitamin B12 levels were normal. Her lactate dehydrogenase (LDH) level was elevated (547 U/L), and her haptoglobin level was reduced (< 10 mg/dL). Her inflammatory markers were significantly elevated, with a C-reactive protein (CRP) of 38 mg/L, an erythrocyte sedimentation rate (ESR) of 60 mm/h, interleukin (IL)-2 of 84,250 pg/mL and chemokine (C-X-C motif) ligand 9 (CXCL9) of 11,741 pg/mL. Her Epstein-Barr virus (EBV) polymerase chain reaction (PCR) result was 1,400 IU/mL.

Diagnosis

Hematology consultation was called for unexplained pancytopenia. Based on the iron panel results and the increased CRP and ESR values, the initial concern was for anemia of chronic disease. Suspicion for hemophagocytic lymphohistiocytosis (HLH) was low due to the absence of splenomegaly, further corroborated by resolving thrombocytopenia and ferritin levels during her admission [6]. Suspicion for adult-onset Still's disease was extremely low due to the absence of splenomegaly, rash, or a reported history of recent sore throat. The absence of schistocytes on the peripheral blood smear made thrombotic thrombocytopenic purpura (TTP) unlikely. The cryoglobulin screen was positive, with trace cryoprecipitates detected. Serum and urine protein electrophoresis were negative for monoclonal gammopathy. HFE gene mutation testing was negative. A repeat CT of the chest/abdomen/pelvis did not show lymphadenopathy in the axillary, supraclavicular, mediastinal, hilar, retroperitoneal, mesenteric, or inguinal areas. She was transferred to the intensive care unit (ICU) after becoming tachycardic and developing significant metabolic acidosis, with serum lactate rising to 5.5 mmol/L. She was eventually intubated for ventilation support. Driven by the unexplained cytopenia, a sternal bone marrow aspiration and biopsy was performed, which showed RS cells. There was no evidence of HLH. The cells stained positive for CD15, CD30, PAX5 and EBV, and negative for CD20 (Fig. 2). A diagnosis of HL was established. Takotsubo cardiomyopathy complicated her disease course, resulting in a drop in her ejection fraction (EF) from 55% to 23%, along with an apical left ventricular thrombus warranting initiation of a heparin drip. She also developed bilateral pleural effusions requiring thoracentesis, which revealed the fluid to be transudative. She was negative for HIV in the blood.

Treatment

A modified chemotherapy regimen of cyclophosphamide (750 mg intravenous (IV) for 1 day), vincristine (2 mg IV for 1 day), and dexamethasone (40 mg orally (PO) daily for 4 days) was given. Doxorubicin was avoided due to her reduced EF. Allopurinol 300 mg orally twice daily with adequate IV fluid hydration was initiated. She was monitored for tumor lysis syndrome with daily labs, including uric acid, phosphorus, potassium, and calcium. A repeat echocardiogram demonstrated an increase in EF to 59%.

Follow-up and outcomes

A plan was made to discharge the patient after a course of cytoreductive chemotherapy and to start her on the AVD (doxorubicin, vinblastine, dacarbazine) + BV (brentuximab vedotin) combination in the outpatient setting. However, her medical condition deteriorated quickly. She was admitted to the ICU for hypotension associated with hyponatremia.
The patient died of cardiovascular arrest a few days later, without receiving another cycle of chemotherapy.

Discussion

HL may present only as lymphadenopathy or a mediastinal mass, with or without constitutional symptoms. Some patients may present only with fatigue, fever, or pruritus. Obtaining a comprehensive history, including but not limited to the onset, duration, and extent of lymphadenopathy, constitutional symptoms, and a possible family history of HL, is vital for the appropriate differential diagnosis of HL. A thorough and extensive physical examination is essential to evaluate the extent of HL involvement. Positron emission tomography (PET)/CT scans and other imaging studies are required to stage the HL. A tissue biopsy is the gold standard to confirm and stratify the subtype of HL. Stage IV HL is characterized by involvement of extralymphatic organs, regardless of whether there is lymphatic tissue involvement. EBV and HIV infections are prevalent in geographic regions with poor socioeconomic development and serve as risk factors for HL development. However, only a small fraction of EBV-infected individuals develop EBV-positive HL. About 20-50% of HL cases in North America and Europe are EBV positive. Almost all cHL cases in tropical and developing countries are EBV positive [7]. Several case report series have described the diagnosis of PBMHL in HIV patients. These studies demonstrated that PBMHL in HIV patients presents a more aggressive disease course, with a significantly shorter median survival time than typical HL in HIV patients [8,9]. Ponzoni et al reported that the median survival time in HIV patients with PBMHL was 4 months, whereas in HIV patients with cHL it was 15 months [8]. Shah et al reported that the median age of HIV patients with PBMHL was 36 years, significantly younger than HIV-negative PBMHL patients, who were older than 50 years of age [9]. The atypical clinical presentation of PBMHL contributes, at least partially, to the delay in diagnosis, increased mortality rates, and shorter survival times. There have been reports of PBMHL in HIV-negative pediatric patients, and these cases also carry a delayed time to diagnosis and a worse prognosis [10]. Due to its relatively unusual presentation, misdiagnosis and delayed treatment of PBMHL are common. Recent literature has suggested replacing bone marrow biopsy with PET/CT scans to detect bone marrow involvement in extranodal HL or PBMHL. However, false-negative cases can occur with PET/CT. Therefore, bone marrow biopsy continues to be the gold standard to confirm bone marrow involvement in HL patients, with PET/CT employed for follow-up. So far, only seven cases of PBMHL have been reported (Table 1) [10-15]. As outlined above, the current first-line combination chemotherapy for HL is ABVD, but there is a scarcity of guidelines for PBMHL management. Due to this patient's poor baseline status, treatment was adjusted to offer a better safety profile with cyclophosphamide, vincristine, and dexamethasone alone. A study conducted by the Nordic Lymphoma Group in elderly HL patients with comorbidities showed cyclophosphamide, doxorubicin, vincristine, and prednisone (CHOP) to be an effective regimen with less associated toxicity [11].

Conclusions

This case report highlights the presence of cHL as an isolated bone marrow disorder in immunocompetent patients without HIV infection.
Due to the atypical presentation of the disease, PBMHL patients are often misdiagnosed, leading to delays in management, which is further complicated by the disease's aggressive nature. Therefore, an early bone marrow biopsy is warranted in patients with fever of unknown origin, unexplained cytopenia, and a rapidly worsening clinical course. HL usually has a fairly good cure rate when treated with proper chemotherapy regimens, but the prognosis of PBMHL continues to be poor. The ECHELON-1 trial, studying the efficacy of adding the antibody-drug conjugate brentuximab to the existing chemotherapy regimen of AVD for advanced HL, showed a robust and durable improvement in progression-free survival versus ABVD [12]. It is worth trying this innovative combination regimen in PBMHL.

Learning points

This report highlights a case of cHL involving the bone marrow primarily in an immunocompetent patient. Identifying PBMHL early can result in timely initiation of chemotherapy.

Acknowledgments

Photomicrographs are credited to Mary Hansen Smith MD, Department of Pathology, Banner University Medical Center, Tucson, Arizona.

Financial Disclosure

This study was supported by: an AA&MDSIF research grant to JJP (146818), an American Cancer Society grant to JJP (124171-IRG-13-043-02), a JTTai&Co Foundation Cancer research grant to JJP, an NIH/NCI grant to JJP (P30CA023074), and a University of Arizona Cancer Center research grant to JJP.
Plausible Role of Asthma Biological Modifiers in the Treatment of Eosinophilic Esophagitis

Eosinophilic esophagitis (EoE) is a T-helper type-2 (Th2/T2) cell-mediated disease characterized by 15 or more eosinophils per high-powered esophageal biopsy microscopy field (eos/hpf), after excluding other causes. EoE is often clinically characterized by symptoms such as dysphagia, nausea, food impaction, and chest pain that do not respond to antacids. Two-thirds of patients are unresponsive to proton pump inhibitors (PPIs). Steroids may be effective but pose long-term health risks and can lose efficacy in patients with serum eosinophilia greater than 1,500 cells/µL. Because EoE is not IgE-mediated, allergy skin testing for food may benefit only a subset of patients. These therapies have shortcomings, which necessitates further investigation. Herein, we report a patient successfully treated with benralizumab (anti-IL-5Rα), demonstrating a potential solution to the lack of effective treatments for EoE.

Introduction

First reported in 1977, eosinophilic esophagitis (EoE) describes the hyperinflammatory eosinophilic infiltration and narrowing of the esophagus, immunologically akin to asthma [1]. Current treatments comprise proton pump inhibitors (PPIs), diet modification, corticosteroids, and esophageal dilation. However, PPIs are ineffective for nearly two-thirds of EoE patients [2]; dietary modifications are difficult to maintain [2]; and corticosteroids (viscous budesonide) may increase the risk of hypothalamic-pituitary-adrenal axis suppression, oral candidiasis, and other side effects [3]. Moreover, among the 33% of EoE patients who receive endoscopic dilation, approximately 58% require additional dilations, potentiating the risk of fibrostenotic progression and perforation [4]. As of March 2021, current treatments are largely ineffective, and none are labeled for EoE [5,6]. However, benralizumab may serve as an effective treatment option, as demonstrated by the reduction of esophageal eosinophils and symptoms in our case.

Case Presentation

We administered a free sample dose of benralizumab (30 mg subcutaneous) in an off-label manner to treat a patient with EoE who did not tolerate steroid and PPI treatment. This patient preferred to avoid long-term use of steroids, with their attendant side effects. A 27-year-old woman diagnosed with EoE by her gastroenterologist was referred to us after one year of failing to respond to optimal anti-reflux measures, including pantoprazole treatment and diet modification (caffeine restriction). Moreover, a six-food elimination diet did not suffice either. Her previous gastroenterologist had also placed her on fluticasone/vilanterol 100 mcg-25 mcg/inh inhalation powder, one puff swallowed once a day (without a spacer). Shortly thereafter, the patient refused this treatment due to concerns about the potential consequences of consuming a steroid suspension on her singing. Furthermore, she had shown no improvement on her steroid regimen. On her initial visit with us, the patient complained of minor dysphagia, spontaneous nausea, general discomfort in her throat, and difficulty swallowing pills. Her percutaneous skin testing for 30 different food allergies was negative; spirometry was normal. Her family history was unremarkable besides a mother with multiple sclerosis and hypertension and a brother with allergies and asthma. Her blood analysis showed an absolute eosinophil count of 400. The patient's endoscopy showed esophageal inflammation and erythema (Figure 1).
Her initial esophageal biopsy revealed more than 15 eos/hpf, supporting the diagnosis of EoE. Five days after receiving benralizumab (with the patient's consent), her symptoms completely subsided. One month later, her blood eosinophil count was zero. The esophageal mucosa appeared normal one month after benralizumab treatment. She was asymptomatic for 2.5 months following administration of benralizumab, but symptoms such as nausea recurred slightly after 2.5 months.

Discussion

The phenotype of EoE typically presents as dysphagia, upper chest pain, nausea, and/or choking from food impaction [7]. The standard first line of management for symptoms suggestive of EoE involves approximately eight weeks of PPI treatment. If symptoms continue, then EoE is a likelier diagnosis, and an esophagogastroduodenoscopy (EGD) is required to differentiate EoE from gastroesophageal reflux disease (GERD). EoE confirmation entails detecting 15 eos/hpf or more in an esophageal biopsy, while seven or fewer is likely GERD (Figure 2) [4].

Note: Original artwork created by Timothy Olsen BS, on September 17, 2020.

Currently, four Food and Drug Administration (FDA)-approved anti-eosinophil medications for asthma exist: benralizumab, mepolizumab, reslizumab, and dupilumab. Our selection of benralizumab was in part due to its relatively low percentage of reported adverse reactions. For example, during a 28-week clinical trial comparing benralizumab-treated asthma patients with placebo-treated asthma patients, the only statistically significant adverse reactions that occurred with an incidence of 3% or more were headaches (8.2% versus 5.3%, respectively) and pyrexia (2.7% versus 1.3%, respectively) [8]. Moreover, benralizumab has a high specificity for lowering eosinophils [9]. It may thus bypass many of the traditional inefficiencies associated with current approaches by targeting the disease's underlying immunopathogenesis more directly, as demonstrated in our patient's treatment. According to the FDA, the treatment's primary contraindication is hypersensitivity to benralizumab or any of its excipients [8]. The recommended dose and administration of benralizumab is 30 mg administered subcutaneously once every four weeks for the first three doses, followed by once every eight weeks thereafter [8]. Benralizumab has an absorption half-life of approximately 3.6 days and an estimated absolute bioavailability of about 58% following subcutaneous administration to the thigh, abdomen, or arm (no clinically relevant difference in relative bioavailability) [8]. The medication is a humanized monoclonal antibody (IgG1/κ-class) sourced from Chinese hamster ovary cells with recombinant DNA technology [8]. As a monoclonal antibody, benralizumab reduces eosinophils through two distinct mechanisms: first, antagonistically blocking the alpha moiety of eosinophils' IL-5 receptors, and second, activating natural killer (NK) cells for eosinophil targeting [10]. The first mechanism competitively blocks IL-5 binding to IL-5Rα via benralizumab's Fab region, which binds with a dissociation constant of 11 pM [8]. The second mechanism involves binding between benralizumab's Fc constant region and FcɣRIII receptors on immune effector cells, such as NK cells (dissociation constant of 45.5 nM), to cytotoxically target and destroy eosinophils (Figure 3) [8,10].

FIGURE 3: EoE pathophysiology.
Dendritic cell activation precedes major histocompatibility complex class 2 (MHC2) antigen presentation and the subsequent polarization of naive CD4+ T-cells (Th0) to T-helper type-2 cells (Th2). Th2 cells then release the cytokines IL-5 (light green) and interleukin-13 (IL-13) (dark blue). IL-5 recruits eosinophils to the esophagus and stimulates eosinophilic proliferation in EoE. Benralizumab (light blue) blocks IL-5 from binding to IL-5Rα on eosinophils [10].

While benralizumab may provide a more targeted approach than today's standard treatments, it is nevertheless an off-label drug for EoE, costing $5,197.33/mL out-of-pocket [11]. In addition, while a single free sample was not enough to completely eliminate her symptoms, the sample stabilized her symptoms at a lower level of discomfort compared to pre-treatment, as reported by the patient.

Conclusions

As of 2021, there are no FDA-approved medications for EoE. The current standard of care includes dietary modification, PPIs, and swallowed corticosteroids. Nevertheless, these interventions are not always successful or tolerable. Reducing our patient's esophageal eosinophil count on EGD from more than 15 eos/hpf to zero eos/hpf alleviated the patient's dysphagia, difficulty swallowing pills, general discomfort and spontaneous nausea for one and a half months. Symptoms later returned at a lower ceiling of intensity. Future strategies may entail more precisely targeting the immunopathogenesis of EoE. This case highlights benralizumab as a potential treatment for EoE due to its specificity in repressing eosinophil activation and growth.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The space $\dot{\mathcal{B}}'$ of distributions vanishing at infinity - duals of tensor products

Analogous to L.~Schwartz' study of the space $\mathcal{D}'(\mathcal{E})$ of semi-regular distributions we investigate the topological properties of the space $\mathcal{D}'(\dot{\mathcal{B}})$ of semi-regular vanishing distributions and give representations of its dual and of the scalar product with this dual. In order to determine the dual of the space of semi-regular vanishing distributions we generalize and modify a result of A. Grothendieck on the duals of $E \hat\otimes F$ if $E$ and $F$ are quasi-complete and $F$ is not necessarily semi-reflexive.

Introduction

L. Schwartz investigated, in his theory of vector-valued distributions [24,25], several subspaces of the space $\mathcal{D}'_{xy} = \mathcal{D}'(\mathbb{R}^n_x \times \mathbb{R}^m_y)$ that are of the type $\mathcal{D}'_x(E_y)$, where $E_y$ is a distribution space. In this paper we will be concerned with the space $\mathcal{D}'(\dot{\mathcal{B}})$ of "semi-regular vanishing" distributions.

Notation and Preliminaries. We will mostly build on notions from [26,6,24,25]. $\mathcal{D}'(E)$ is defined as $\mathcal{D}' \varepsilon E$, which space coincides with $\mathcal{D}' \mathbin{\hat\otimes}_\varepsilon E = \mathcal{D}' \mathbin{\hat\otimes}_\pi E =: \mathcal{D}' \mathbin{\hat\otimes} E$ in the 3 examples above. If $E$, $F$ are two locally convex spaces then $E \mathbin{\hat\otimes}_\pi F$, $E \mathbin{\hat\otimes}_\varepsilon F$ and $E \mathbin{\hat\otimes}_\iota F$ denote the completion of their projective, injective, and inductive tensor product, respectively; writing $\overset{u}{\otimes}$ in place of $\hat\otimes$ means that we take the quasi-completion instead. The subscript $\beta$ in $E \otimes_\beta F$ [25, p. 12, 2°] refers to the finest locally convex topology on $E \otimes F$ for which the canonical injection $E \times F \to E \otimes F$ is hypocontinuous with respect to bounded sets. Given a locally convex space $E$, $E'_b$ denotes its strong dual, $E'_\sigma$ its weak dual and $E'_c$ its dual with the topology of uniform convergence on absolutely convex compact sets. In absence of any of these designations, $E'$ carries the strong dual topology. For the definition of $\mathcal{D}(E)$ see [24, p. 63] and [22, p. 94]. $\dot{\mathcal{B}}'$ is the space of distributions vanishing at infinity, i.e., the closure of $\mathcal{E}'$ in $\mathcal{D}'_{L^\infty}$ [26, p. 200]. $N(E, F)$ and $\mathrm{Co}(E, F)$ denote the space of nuclear and compact linear operators $E \to F$, respectively. The normed space $E_U$ for an absolutely convex zero-neighborhood $U$ in $E$ is introduced in [6, Chap. I, p. 7], with associated canonical mapping $\Phi_U : E \to E_U$. $L(E, F)$ is the space of continuous linear mappings $E \to F$. By $B_s(E, F)$ and $B(E, F)$ we denote the spaces of separately continuous and continuous bilinear forms $E \times F \to \mathbb{C}$, respectively, and $B_h(E, F)$ is the space of separately hypocontinuous bilinear forms; for any of these spaces, the index $\varepsilon$ denotes the topology of bi-equicontinuous convergence.

Motivation. In order to prove, e.g., the equivalence of $S(x-y)T(y) \in \mathcal{D}'_x(\mathcal{D}'_{L^1,y})$ for two distributions $S$, $T$ (which means, by definition, that $S, T \in \mathcal{D}'(\mathbb{R}^n)$ are convolvable) to the inequality
\[
\forall K \subset \mathbb{R}^n \text{ compact}\ \exists C > 0\ \exists m \in \mathbb{N}_0\ \forall \varphi \in \mathcal{D}(\mathbb{R}^{2n}) \text{ with } \operatorname{supp}\varphi \subseteq \{(x,y) \in \mathbb{R}^{2n} : x+y \in K\}:\ |\langle \varphi(x,y), S(x)T(y)\rangle| \le C \sup \cdots,
\]
it is advantageous to know of a "predual" of $\mathcal{D}'(\mathcal{D}'_{L^1})$. In the memoir [23] L. Schwartz investigated the space $\mathcal{D}'(\mathcal{E})$ of semi-regular distributions. For reasons of comparison we present the main features thereof in Section 2, i.e., in Proposition 1 properties of $\mathcal{D}'(\mathcal{E})$, in Proposition 2 the dual and a "predual" of $\mathcal{D}'(\mathcal{E})$, and in Proposition 3 an explicit expression for the scalar product of $K(x,y) \in \mathcal{D}'_x(\mathcal{E}_y)$ with $L(x,y) \in \mathcal{D}_x \mathbin{\hat\otimes}_\iota \mathcal{E}'_y$. These propositions generalize the corresponding propositions in [23] and new proofs are given.
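Before proceeding, the tensor-product notation fixed in the Preliminaries can be collected in a single display; this is merely a restatement of the definitions above, not a new result:
\[
\begin{aligned}
&E \mathbin{\hat\otimes}_{\pi} F,\ E \mathbin{\hat\otimes}_{\varepsilon} F,\ E \mathbin{\hat\otimes}_{\iota} F
  && \text{completions of the projective, injective, inductive tensor products;}\\
&E \mathbin{\overset{u}{\otimes}}_{\pi} F,\ \dots
  && \text{the corresponding quasi-completions;}\\
&E \otimes_{\beta} F
  && \text{finest locally convex topology making } E \times F \to E \otimes F \text{ hypocontinuous on bounded sets.}
\end{aligned}
\]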
In [15] we found the condition $\forall \varphi \in \mathcal{D}: (\varphi * \check{S})\,T \in \dot{\mathcal{B}}'$ for two distributions $S$, $T$, in order that $(\partial_j S) * T = S * (\partial_j T)$ under the assumption that $(\partial_j S, T)$ and $(S, \partial_j T)$ are convolvable (see also [9, p. 559]). The equivalence of (1) and (2) is proven in [16]. Due to the regularization property for a distribution $S$ [26, remarque 3°, p. 202] we are motivated to investigate the space $\mathcal{D}'(\dot{\mathcal{B}})$ of "semi-regular vanishing distributions" analogously to $\mathcal{D}'(\mathcal{E})$ in [23], i.e.,
• to state properties of $\mathcal{D}'(\dot{\mathcal{B}})$ in Proposition 4,
• to determine the dual of $\mathcal{D}'(\dot{\mathcal{B}})$ in Proposition 5,
• to express explicitly the scalar product in Proposition 6, and
• to determine the transpose of the regularization mapping $\dot{\mathcal{B}}' \to \mathcal{D}'_y(\dot{\mathcal{B}}_x)$, $S \mapsto S(x-y)$, in Proposition 7.

Duals of tensor products. Looking for $(\mathcal{D}'(\dot{\mathcal{B}}))'_b$ we make use of the following duality result of A. Grothendieck, which allows, in contrast to the corresponding propositions in [12, §45, 3.(1), p. 301; §45, 3.(5)], … Note that for our example in the introduction the assumption of semireflexivity is not fulfilled. Nevertheless we reach the conclusion by observing that …, $\mathcal{B}_c$ being the semireflexive space $\mathcal{D}_{L^\infty}$ endowed with the topology of uniform convergence on compact subsets of $\mathcal{D}'_{L^1}$ [26, p. 203], which can also be described by seminorms [17, Prop. 1.3.1, p. 11]. Therefore, in Section 4, we prove a generalization of Grothendieck's Corollary and a modification which applies to semi-reflexive locally convex Hausdorff spaces $F$ such that the completeness of $(E \mathbin{\hat\otimes} F)'_b$ can be shown by the existence of a space $F_0$ such that (…). There is yet another condition equivalent to (1) and (2): …, which is nothing else than …. This equivalence can be shown by the use of the "kernel identity" which we prove in Section 5 (Proposition 10 [23, p. 110]); together with the barrelledness of $\mathcal{D}'(\mathcal{E})$, (ii) yields the reflexivity of $\mathcal{D}'(\mathcal{E})$. In [23], the dual of $\mathcal{D}'(\mathcal{E})$ is described by the representation of its elements as finite sums of derivatives with respect to $y$ of functions $g(x,y) \in \mathcal{D}_x \mathbin{\hat\otimes} \mathcal{D}^0_{y,c}$. It can be shown that the isomorphism is also a topological one. Other representations of the dual of $\mathcal{D}'(\mathcal{E})$ are given in:

Proposition 2 (Dual of $\mathcal{D}'(\mathcal{E})$). We have linear topological isomorphisms … and linear isomorphisms …. For the notation $\mathcal{D}(E'; \varepsilon)$ or $\mathcal{D}(E'; \beta)$ see [25, p. 54].

Proof. The first isomorphism of (4) results from the Corollary cited in the introduction: the 5 hypotheses are fulfilled due to Proposition 1 (ii). The equality $E \otimes_\beta F = E \otimes_\iota F$ is a consequence of the barrelledness of $E$ and $F$; in the case above: $E = \mathcal{D}$, $F = \mathcal{E}'$. The isomorphisms (5) and (6) ….

Proposition 3 (Existence and uniqueness of the scalar product). There is one and only one scalar product which is partially continuous and coincides on (…) ….

Proof. A first proof is given in [23, Proposition 1, p. 112], by means of the explicit representation of the elements of the strong dual $\mathcal{D} \mathbin{\hat\otimes}_\iota \mathcal{E}'$ of $\mathcal{D}'(\mathcal{E})$ hinted at before Proposition 2. The uniqueness can be presumed because L. Schwartz uses the words "le produit scalaire". A third proof follows by composition of the vectorial scalar product [25, Proposition 10, p. 57] with the scalar product ….

3 "Semi-regular vanishing" distributions

Proof. The non-semireflexivity and the semireflexivity, respectively, are consequences of Grothendieck's permanence result [6, Chap. II, §3, n°2, Proposition 13 e., p. 76], due to the corresponding properties of $\dot{\mathcal{B}}$ and $\mathcal{B}_c$, respectively.
The sequence space representation … is ultrabornological (as seen in the proof of Proposition 1 (ii)), nuclear, and by [6, Chap. II, §2, n°2, Théorème 9, 2°, p. 47] so is its dual ….

To show that $\mathcal{D}'(\mathcal{B}_c)$ is not bornological we have to find a linear form $K$ in $(\mathcal{D}' \mathbin{\hat\otimes} \mathcal{B}_c)^*$, the algebraic dual of $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{B}_c$, which is locally bounded (i.e., it maps bounded subsets of $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{B}_c$ into bounded sets of complex numbers) but not continuous. Because $\mathcal{B}_c$ is not bornological there exists $T \in (\mathcal{B}_c)^*$ which is locally bounded but such that $T \notin (\mathcal{B}_c)'$. Fixing any $\varphi_0 \in \mathcal{D}$ with $\varphi_0(0) = 1$, we define $K$ by $K(u) := T(u(\varphi_0))$ for $u \in L(\mathcal{D}, \mathcal{B}_c)$. Then $K$ is locally bounded but not continuous. In fact, taking a net (…) ….

Analogously to the explicit description of the elements in $(\mathcal{D}'(\mathcal{E}))'_b$, cited before Proposition 2, let us represent the elements of $(\mathcal{D}'(\dot{\mathcal{B}}))'$: …. The boundedness of $(T_\nu)_{\nu \in \mathbb{N}}$ implies … and maps bounded sets of $\mathcal{D}'$ into bounded sets of $L^1$; this follows from the boundedness of $(\varphi_\nu)$ in $\mathcal{D}$.

Proposition 5 (Dual of $\mathcal{D}' \mathbin{\hat\otimes} \dot{\mathcal{B}}$). We have linear topological isomorphisms …, because $\mathcal{D}' \mathbin{\hat\otimes} \dot{\mathcal{B}}$ is distinguished: the bounded sets of $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{B}_c$ (and also of $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{B}$) are contained in the weak closure of bounded sets in $\mathcal{D}' \mathbin{\hat\otimes} \dot{\mathcal{B}}$. Hence, …. Therefore, the first isomorphism in (8) holds algebraically, which is a representation of the strong dual of $\mathcal{D}'(\dot{\mathcal{B}})$ as a countable inductive limit. In fact, for $K \in (\mathcal{D}'(\dot{\mathcal{B}}))'$ we conclude by the implication "⇒" above that …; it suffices, due to "⇐" above, to show the implication …. However, this implication is a consequence of ….

Proposition 6 (Existence and uniqueness of the scalar product). There is one and only one scalar product which is partially continuous and coincides on ….

Proof. A first proof follows from the "Théorèmes de croisement" [25, Proposition 2, p. 18].

Remark. If $K(x,y) \in \mathcal{D}_x \mathbin{\hat\otimes}_\iota \mathcal{D}'_{L^1,y}$ has the representation …, we find a third expression for the scalar product by means of vector-valued multiplication and integration: if … $\in \mathcal{D}'_x \mathbin{\hat\otimes} \dot{\mathcal{B}}_y$, then the scalar product $\langle\,\cdot\,,\,\cdot\,\rangle_{x,y}$ (Proposition 6) can also be expressed as …, wherein $\cdot_x \cdot_y$ denotes the vectorial multiplicative product ….

Proof. The vectorial multiplicative product $\cdot_x \cdot_y$ exists uniquely as the composition of the canonical mapping defined by the "Théorèmes de croisement" [25, Proposition 2, p. 18], and the $\varepsilon$-product of the two multiplications …. Note that this vectorial multiplication coincides with that defined in [25, Proposition 32, p. 127]. Due to the uniqueness of the scalar product and the continuity of the embedding $\mathcal{E}'_x \mathbin{\hat\otimes} \mathcal{D}'_{L^1,y} \hookrightarrow \mathcal{D}'_{L^1,xy}$ the result follows.

Proposition 7 (Existence of the regularization mapping and representation of its transpose). The regularization mapping $\dot{\mathcal{B}}' \to \mathcal{D}'_y(\dot{\mathcal{B}}_x)$, $S \mapsto S(x-y)$, is well-defined, linear, injective and continuous. Its transpose is linear, continuous and given by ….

Proof. 3. The representation in Proposition 6′ yields for $K(x,y) \in \mathcal{D}_x \mathbin{\hat\otimes} \mathcal{D}'_{L^1,y}$ and $S(y-x) \in \mathcal{D}'_x \mathbin{\hat\otimes} \dot{\mathcal{B}}_y$: …. The linear change of variables …. Then, the multiplicative product $K(v-u, v) \cdot_u \cdot_v (S(u) \otimes 1(v))$ is defined as the image of $(K(v-u,v), S(u) \otimes 1(v))$ under the mapping …. It remains to prove the implication (9): a vectorial regularization property similar to [1, Proposition 15] shows that ….

4 On the duals of tensor products - two complements

The goal of this section is the formulation of propositions which yield, as special cases, the strong duals of the spaces $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{D}'_{L^1}$ and $\mathcal{D}' \mathbin{\hat\otimes} \dot{\mathcal{B}}$.
These spaces are the "endpoints" in the scale of reflexive spaces $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{D}'_{L^p}$ and $\mathcal{D}' \mathbin{\hat\otimes} \mathcal{D}_{L^q}$, $1 < p, q < \infty$, the duals of which can be determined by the Corollaire [6, Chap. II, §4, n°1, Lemme 9, Corollaire, p. 90] cited in the introduction.

Proposition 8 (Dual of a completed tensor product). Let $H = \varinjlim_k H_k$ be the strict inductive limit of nuclear Fréchet spaces $H_k$ and $F$ the strong dual of a distinguished Fréchet space. Then ….

Proof. By [25, Prop. 22, p. 103] we have algebraically …, due to the reflexivity of $H$ and due to the fact that a linear and continuous map $T : F \to H$ is bounded if and only if there exists $k$ and a 0-neighborhood $U$ in $F$ such that $T$ maps into $H_k$ and $T(U) \subseteq H_k$ is bounded. Because $F$ is the strong dual of a distinguished Fréchet space, … [25, p. 98, b)]. Hence, … [6, Chap. I, p. 75]). All together, …. The strong dual topology on $(H'_b(F))'$ is finer than the topology of uniform convergence on products of bounded subsets of … [5, p. 64], and … that the inductive limit of the Fréchet spaces …. By [25, Prop. 22, p. 103] this is equivalent with saying that bounded subsets of ….

(2) If the space $F$ is the strong dual of a reflexive Fréchet space then $H'_b \mathbin{\hat\otimes} F$ is reflexive too, i.e., …. One has to show that $S_m$ and $E_m$ are distinguished.

Proposition 9 (Dual of a completed tensor product). Let $H$ be a Hausdorff, quasicomplete, nuclear, locally convex space with the strict approximation property, $F$ a quasicomplete, semireflexive, locally convex space. Let $F_0$ be a locally convex space such that …. The proof is an immediate consequence of Proposition 10. The semireflexivity is a consequence of [6, Corollaire 2, p. 118].

Remark. (1) A. Grothendieck's hypotheses "$H$ complete, $F$ complete and semireflexive" are weakened by the assumption of quasicompleteness at the expense of the additional hypothesis of the strict approximation property for $H$. The completeness of the strong dual $(H \mathbin{\hat\otimes} F)'_b$ is implied by the existence of an additional space $F_0$ with the corresponding property. (2) By checking the hypotheses of Proposition 9 we have shown in Proposition 5 that …. Two other applications to concrete distribution spaces are: ….

The proof of Proposition 9 rests on a generalization of Grothendieck's Corollary on duality (cf. [6, Chap. II, §4, n°1, Lemme 9, Corollaire, p. 90]) which we prove now.

Proposition 10 (Duals of tensor products). Hypothesis 1: Let $E$ be a nuclear, $F$ a locally convex space. Then: …, where $A$ and $B$ are absolutely convex, weakly closed, equicontinuous subsets of $E'$ and $F'$, respectively. If, in addition, we have Hypothesis 2: $E$ is quasicomplete and has the strict approximation property and $F$ is quasicomplete, then we obtain: …. If, in addition, we have Hypothesis 3: $F$ is semireflexive, then we obtain: ….

Proof. We shall modify the proof of Grothendieck and give more details. (i) If $u \in B(E, F)$ then the nuclearity of $E$ implies the existence of zero-neighborhoods $U$ in $E$ and $V$ in $F$ and of sequences (…) …. The series in (11) … $\|e\|_U \, \|f\|_V$ due to Lemma 11 and inequality (10). Next let us describe in detail the canonical mapping $E'_A \otimes_\pi F'_B \to B(E, F)$ as the composition of the following three mappings (12): … is the continuous extension of the linear map on $(E_U)' \otimes (E_V)'$ corresponding to the continuous bilinear map …. The third mapping in (12) is given as the transpose of …. The image of $u_0$ in $B(E, F)$ coincides with $u$: denoting the image of $u_0$ in all spaces appearing in (12) by $u_0$, we obtain by going from right to left in the composition above: $\sum_n \langle e, e'_n\rangle \langle f, f'_n\rangle = u(e, f)$.
(ii) We also have a canonical mapping …; it is injective because the zero-neighborhood $U$ can be chosen in such a manner that $E_U$ is a Hilbert space [6, Chap. II, §2, n°1, Lemme 3, p. 37], and, therefore, $E_U$ is reflexive and has the approximation property. Hence, the vanishing of $u_0$ on $E_U \times F_V$ implies $u_0 = 0$ and a fortiori $u = 0$.

(iv) If, in addition, Hypothesis 2 is fulfilled, we obtain by Lemma 12 below that $E \mathbin{\overset{u}{\otimes}}$ …. The mapping $j$ is defined as the injection …. The mapping $m$ is the injection of the space $B_h(E'_c, F'_c)$ of hypocontinuous bilinear forms on $E'_c \times F'_c$ into the space $B_s(E', F')$ of separately continuous bilinear forms on $E' \times F'$. Furthermore, we have canonical isomorphisms …. On the one hand, we consider on $B(E, F)$ the relative topology $t_\iota$ with respect to the embedding $j$. Because the topology of $E' \mathbin{\overset{u}{\otimes}}_\iota F'$ is the topology of uniform convergence on equicontinuous subsets of $(E' \mathbin{\overset{u}{\otimes}}_\iota F')' = B_s(E', F')$, and equicontinuous subsets of this dual space are precisely the equicontinuous subsets of $B_s(E', F')$ [6, Chap. I, §3, n°1, Proposition 13, p. 73], we see that a zero-neighborhood base of $t_\iota$ is given by the sets $j^{-1}(\ell(B')^\circ)$ with separately equicontinuous subsets $B'$ of $B_s(E', F')$.

Let us now show that $t_\iota$ is finer than $t_b$; in particular, that for a given bounded set $B$ in $E \mathbin{\overset{u}{\otimes}}_\pi F$ the set $B' := m(k(B))$ is separately equicontinuous and that …, for fixed $f' \in F'$, is bounded on all equicontinuous sets $U^\circ$, $U$ an absolutely convex, closed zero-neighborhood in $E$, i.e., $\forall U\ \exists \lambda_U > 0$ such that …; …) is equicontinuous on $F'_b$ for $e' \in E'$ fixed. Therefore, $B' := m(k(B))$ is a separately equicontinuous subset of $B_s(E', F')$, which implies that $(\ell(B'))^\circ$ is a zero-neighborhood in $t_\iota$, which together with $j^{-1}(\ell(B')^\circ) = B^\circ$ will imply $t_b \le t_\iota$. In order to prove $j^{-1}(\ell(B')^\circ) = B^\circ$ for a given bounded subset $B \subset E \mathbin{\overset{u}{\otimes}}_\pi F$ with $B' = m(k(B))$, we write down the involved mappings explicitly. Let $u \in B(E, F)$ with $j(u) = \sum_{n=1}^{\infty} e'_n \otimes f'_n$ and $z \in E \mathbin{\overset{u}{\otimes}}_\pi F$. Then, $\langle j(u), \ell(m(k(z)))\rangle$ …, where (1) is the definition of $j$, (2) follows from the continuity of $\ell(m(k(z)))$, and (3), (4) and (5) are the definitions of $\ell$, $m$ and $k$, respectively. The equality (6) is a consequence of the equality … for $z$ in the strictly dense subspace $E \otimes F$ of ….

(vi) follows from $(E \mathbin{\overset{u}{\otimes}}_\pi F)' = B(E, F)$ …, i.e., $|\langle e, e'\rangle| \le \|e\|_U \, \|e'\|_{U^\circ}$.

Lemma 12. By assuming the hypotheses 1 and 2 on $E$ and $F$ we have ….

Proof. The nuclearity of $E$ implies that $E \mathbin{\overset{u}{\otimes}}_\pi F = E \mathbin{\overset{u}{\otimes}}_\varepsilon F$. By the quasicompleteness of $E$ and $F$ and the strict approximation property of $E$, we see that … for locally compact topological spaces $M$ and $N$ [6, Chap. I, p. 90]. Thus, it seems to us of its own interest to state an analogue decomposition of $\dot{\mathcal{B}}'_{xy}$, besides its applicability in proving the equivalence (1) ⟺ (3) hinted at in the introduction.
Shear tests evaluation and numerical modelling of shear behaviour of reinforced concrete beams

A series of 24 reinforced concrete beams – amongst them six prestressed – were tested in the laboratory of the Department of Mechanics, Materials and Structures, TUB in 2005. The principal aim of the research program was to develop the variable strut inclination method used for shear design of reinforced concrete beams by Eurocode 2. Based on numerical evaluation of a substitutive vaulted lattice model with compressed-sheared top chord and on some experiences from the test results, the author proposes to increase the fraction of the shear force attributed to the concrete in simply supported reinforced concrete beams loaded by a uniformly distributed load. Attention is drawn to the importance of adequate anchorage of the horizontal component of the diagonal concrete compression force at the supports.

Acknowledgement

I am very much obliged to Mr László Polgár, technical director of the ASA Building Construction Company, for his considerable help in joining the research project and supplying the test beams from their prefabrication plant in Hódmezővásárhely, 180 km from Budapest. Without the financial support of the Research Development Competition and Research Application Bureau of the Hungarian Ministry of Education, testing of the beams could not have been realized. Many thanks to Ottó Sebestyén, who carried out an enormous amount of work in preparing the test beams. I would also like to express my acknowledgement to my colleagues University Professor László Kollár, member of the Hungarian Academy of Sciences, Professor Emeritus Endre Dulácska, Associate Professor István Sajtos, Assistant Professor István Hamza and Electrical Engineer Miklós Kálló for their numerous pieces of advice.

Introduction

Within the framework of a three-year research project financed by the Hungarian Ministry of Education and in cooperation with the ASA Building Construction Company, a series of 24 reinforced concrete beams – amongst them six prestressed – were tested in the laboratory of the Department of Mechanics, Materials and Structures, TUB in 2005. The principal aim of the investigation was formulated by the author [1] by emphasizing the need for more detailed information about the preconditions of applying the variable strut inclination method for the shear design of reinforced concrete beams outlined by Eurocode 2 (EC2) [2]. Improper anchorage of the tensile reinforcement at extreme supports may cause premature failure if a small strut inclination angle is presumed by the designer in the shear design of the beam. 4 m and 7.6 m span beams were tested, subjected to a uniformly distributed load. The principal variable parameters investigated were stirrup spacing and anchorage conditions. Results of the test evaluation and proposals for the development of the variable strut inclination method are treated by the author.

Test program

2.1 Characteristics of the test beams

Six series of 4 beams each were tested. The T-shaped cross section had a 500 mm depth, a 500 mm flange width and a 160 mm web width. The strength grade of the concrete was about C30/37; the compression strength was tested on cubes before testing of the beam. Steel B60.50, Fp100-1770 prestressing strands, and BHB55.50 used for the stirrups were applied. Beam type A (12 pieces) had a total length of 4.25 m and was elastically supported along 250 mm at both ends. The support length of beam type B (8 pieces) was 100 mm, with the same effective length of 4.00 m as beam type A.
The l_eff/d ratio of these beams was about 10. There were four variants (denoted by K1 to K4) of link spacing investigated: 50, 100, 200 and 300 mm (see Fig. 1). Tension reinforcement was 4 × 3Ø16 bars. In one 4-beam series of B-type beams, 9Ø16 bars were substituted by 2 × 3 Fp100 prestressing strands. Three variants of the longitudinal non-prestressed reinforcement were tested: full-length bars, …

The load application method

Uniformly distributed load was applied through water pressure created in a fibre-reinforced PVC sack. The width of the sack had to be increased to produce the maximum ultimate load intensity of about 320 kN/m by making use of a water-system pressure of 4.2 at. A steel frame and timber boards were used as pressure transmission devices between the water sack, the beam and the steel frame of the loading equipment, respectively. Load intensity was controlled through an 8-channel amplifier, by measuring the support reaction forces. Load was applied in steps of about 1/10 of the ultimate loading. The mean duration of one test was four hours.

Measurements

Concrete deformations were measured using manual deformeters on 2×3 8-point rosettes of 60 mm diameter. At midspan, the contraction of the concrete near the extreme compression fibre was also registered. Signals of strain gauges stuck to 2×3 links near the supports and to the centre bottom Ø16 bar at midspan and along its anchorage length were recorded by a computer-controlled scanner system. Retraction of the tension reinforcement at the extremities and the deflection at midspan were also electronically controlled by the 8-channel amplifier. End rotations and the opening of some major cracks were measured manually. Crack patterns were drawn using different colour pens at higher load intensities.

Control calculations for different compression strut inclinations

3.1 Control calculations

Statistical evaluation of the concrete compression strength was carried out based on rupture test results measured on 5 pieces of 150 mm cubes. Characteristic and design values of the resistance forces corresponding to four different failure modes were computed: flexural failure (M_R), compression failure of concrete due to shear (V_R,max), tension failure of the shear reinforcement (V_Rs), and slip of the tension reinforcement at the beam end (F_Rs). For three of these four failure modes, the load intensity at failure depends on the compression strut inclination angle θ. Characteristic and design load capacities corresponding to the different failure modes were determined for each test beam and for compression strut inclination angles θ varying between 21.6° and 45°, according to the formulae and expressions given in [2]. For the prestressed beams, prestress losses due to elastic deformation, relaxation, shrinkage and creep were determined by respecting the technology and type of products – 7-wire strands, cement class R – used, the concrete strength, the ambient conditions and the time that elapsed between fabrication and testing.
Strut inclination angle and maximum load capacity

The calculated design load capacity corresponding to the failure mode reached first was determined for different compression strut inclination angles θ between 21.6° and 45°. From among these values, the maximum design load capacity and the corresponding compression strut angle were considered and compared with the rupture load intensity and the failure mode observed in the test. Results for beams of type A can be seen in Table 1. The definitions of the beam code letters are: A: 250 mm support length; N1 to N3: three variants of tension reinforcement of normal, non-prestressed beams, as given in 2.1; K1 to K4: four variants of stirrups, as given in 2.1.

17 of the 24 test beams failed in shear. Calculated and real failure modes were in some cases different. The characteristic difference was that, according to the measured strain gauge data, reaching the maximum capacity load was followed by sudden bond failure of the tension reinforcement at the beam extremity. Although the excessive opening of shear cracks seemed to indicate yielding of the stirrups, the strain gauge signals did not always support this, and rupture of the stirrups did not occur in any case.

Rates of measured and calculated load capacities

Ratios of rupture load to calculated design load capacity are given in Table 1 above for beams of type A. It can be observed that the highest ratios were obtained for cases where the calculated design load capacity corresponded to tension failure of the links due to shear. This demonstrates that the design resistance expressed by V_Rds determined according to [2] is too conservative. The regions of beams near the supports are subjected to the highest shear forces, but the diagonal introduction of the concentrated support reaction force and – at extreme supports – the introduction of the equilibrating internal tension force component in the bottom reinforcement result in a local situation which should be handled somewhat differently from the parallel-chord lattice model of Mörsch. In D-regions near the beam extremities, the diagonal compression acting in the concrete contributes to the equilibration of shear forces. This is the main reason why the shear capacity V_Rds is underestimated. In the very same cases, the real reason for failure is bond failure along the anchorage length of the tensile reinforcement, a problem which should be handled with more care, and which is in direct relation with the compression strut inclination angle. The proposed model for D-regions at beam extremities is treated in Section 5.

Load levels corresponding to serviceability limit states

Load levels corresponding to reaching serviceability limit state situations of crack opening and deflection were also registered, but their presentation is outside the scope of this paper.
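To illustrate the selection procedure of Section 3.1 and above – for each candidate strut inclination θ, the governing resistance is the smallest of the failure-mode capacities, and the design capacity is the maximum of these over θ – here is a minimal Python sketch. The four placeholder capacity curves are hypothetical smooth functions chosen only to mimic the qualitative trends named in the text (stirrup capacity grows as the strut flattens, strut crushing capacity peaks at 45°); they are not the EC2 formulas.

```python
import numpy as np

# Hypothetical placeholder capacity curves (kN), NOT the EC2 expressions.
def stirrup_capacity(t_deg):     # V_Rs grows with cot(theta)
    return 120.0 / np.tan(np.radians(t_deg))

def strut_capacity(t_deg):       # V_R,max ~ sin(2*theta), maximal at 45 deg
    return 260.0 * np.sin(np.radians(2.0 * t_deg))

def flexural_capacity(t_deg):    # M_R converted to a load capacity, theta-independent
    return 220.0

def anchorage_capacity(t_deg):   # F_Rs, taken constant in this toy model
    return 190.0

def governing(t_deg):
    """Capacity of the failure mode reached first at a given theta."""
    return min(flexural_capacity(t_deg), strut_capacity(t_deg),
               stirrup_capacity(t_deg), anchorage_capacity(t_deg))

thetas = np.linspace(21.6, 45.0, 200)          # range used in Section 3.1
caps = np.array([governing(t) for t in thetas])
print(f"max design capacity {caps.max():.1f} kN at theta = {thetas[caps.argmax()]:.1f} deg")
```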
Vaulted lattice model with compressed-sheared top chord of reinforced concrete beams

5.1 The vaulted lattice model

In the following, simply supported reinforced concrete beams will be investigated with a constant concrete section, loaded with a uniformly distributed load and supplied with vertical stirrups. Supporting the tied-arch model-creation idea mentioned by Walther (1956), Polónyi (1996) and Schlaich (1998), the author proposes certain refinements of the parallel-chord lattice model of Mörsch. The essence of the proposal is to consider the line of action of the resultant of the top chord compression stresses of reinforced concrete beams – the so-called compression line – to be the compression chord axis of the lattice model of Mörsch. This compression line is arched and intersects the horizontal bottom chord axis – the axis of the tension reinforcement – above the theoretical support point under an angle θ_A.

The vaulted compression line can approximately be determined. When applying the vaulted lattice model for shear design, it is to be emphasized that the vertical component of the concrete compression force acting along the compression line can be considered as part of the shear capacity, which, just at the maximum of the actual shear force, significantly reduces the shear force fraction to be equilibrated by the shear reinforcement: …, where V_Rd,γ is the part of the design shear capacity due to the vertical component of the concrete compression force, N_cV and N_cH are components of the force developing in the concrete compression chord, and γ(x) is the direction angle of the compression line at distance x.

The shear capacity fraction attributed to the concrete should be limited from above. We accept – and take into consideration in the numerical examples below – that … is the design value of the shear capacity fraction attributed to the concrete and should not be greater than the design value of the greatest actual shear force, limited by fracture of the inclined concrete compression struts according to EC2.

The proposed modified lattice model can really be regarded as one with a variable strut inclination angle along the beam axis. The strut inclination angle at the point of intersection of the interior support face and of the bottom plane of the member will be assumed equal to θ as given in EC2, where it is regarded as constant. Although the direction coordinate x is measured from the left support A parallel to the beam axis, the variation of the strut inclination angle θ(x) will be interpreted along the compression line, because the compression strut forces branch off from the compression line. The strut inclination angle θ(x_1) = θ, because the strut with inclination angle θ branches off from the point (x_1, z(x_1)) of the compression line and intersects the bottom plane of the member just at the interior edge of the support (see Fig. 4), which means that ….

The coordinate x_1 is determined by numerical approximation when determining the compression line. Along the left half beam axis, the two sections given below will be distinguished. The section 0 ≤ x ≤ x_1 can be characterized by fan-wise spreading compression forces in the concrete. The top corner of the beam end does not play a significant role in transmitting the support reaction force, so it can even be cut off by a 45° diagonal plane above the bottom reinforcement (see Fig. 5).
As an approximation, the strut inclination θ_A at x = 0 can be considered equal to the arithmetic mean value of 45° and θ, i.e. θ_A = (45° + θ)/2. (4)

Along the section 0 ≤ x ≤ x_1, the strut inclination angle will approximately be regarded as linearly variable: … (5)

The section x_1 ≤ x ≤ 0.5l can be characterized by variable strut inclination angles also because the cracks get steeper towards the centre of the span. This variation will also be approximated by a linear function between θ and 45°: … (6)

In Fig. 5, the variation of θ(x) is shown along the left half beam axis using data of one of the numerical examples. The variation expresses that the shear force fraction that can be transmitted by crack friction decreases towards the interior of the span.

Determination of the compression line

The points of the compression chord axis of the vaulted lattice model indicated in Fig. 5 lie on a curved line that joins tangentially to two given points under given direction angles. They were determined through a series of tangents of the curve at densely lying points along the axis of the beam, using numerical methods. The function of the curve could also be given analytically, but its shape can better be controlled by numerical determination.

Shear force fraction transmitted by the concrete compression zone along the central part of the beam

In the case of higher l/d slenderness ratios, consideration of the vertical component of the internal compression force acting along the compression line – as part of the shear capacity – becomes insignificant along the internal portion of the beam, even for very low values of θ. On the other hand, it is reasonable to take into consideration a limited fraction of the large compression force acting in the concrete compression chord along this section of the beam axis as a contribution of the compressed concrete to the shear capacity, although this is not contained in EC2.

The earlier Hungarian reinforced concrete standard MSz 15022-71 (1971) prescribed in this respect 10% of the compression chord force as part of the shear capacity. The failure condition of the compressed-sheared concrete, based on test experience (Szalai (1988)) [9], is given by the expression below: … (7), where f_ck and f_ct,k are the characteristic values of the concrete uniaxial compression and tensile strengths, respectively, σ_c is the compression stress in the compressed-sheared concrete at failure, and τ_ck is the characteristic value of the shear strength of the compressed-sheared concrete.
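A minimal sketch of the piecewise-linear strut-inclination profile θ(x) described in Section 5.1 (equations (4)-(6)) follows; note that in the paper x_1 is actually found numerically from the compression line, so the values of x_1 and the span used below are hypothetical inputs chosen only for illustration.

```python
def theta_profile(x: float, x1: float, half_span: float, theta: float) -> float:
    """Strut inclination along the left half of the beam, in degrees.
    theta_A = (45 + theta)/2 at the support (eq. (4)); linear between
    theta_A and theta on [0, x1], and between theta and 45 deg on
    [x1, half_span] (eqs. (5)-(6) as described in the text)."""
    theta_a = 0.5 * (45.0 + theta)
    if x <= x1:
        return theta_a + (theta - theta_a) * x / x1
    return theta + (45.0 - theta) * (x - x1) / (half_span - x1)

# Example: theta = 30 deg, hypothetical x1 = 0.5 m on a 6 m span.
for x in (0.0, 0.25, 0.5, 1.5, 3.0):
    print(f"x = {x:4.2f} m  ->  theta = {theta_profile(x, 0.5, 3.0, 30.0):5.1f} deg")
```

At x = 0 this returns θ_A = 37.5°, at x = x_1 it returns θ = 30°, and at midspan it returns 45°, matching the profile shown in Fig. 5.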
When considering a shear strength τ_ck equal to 10% of the compression strength f_ck, the compression strength will – according to (7) – decrease by the same extent. Numerical investigations proved that exploiting the shear strength of the compressed concrete to this extent along the central part of a beam loaded by a uniformly distributed load is sufficient for the beam to be safe against shear with minimum stirrups. The 10% reduction of the flexural-compression strength of the concrete at approximately one quarter of the span will have a relatively small influence on the necessary cross-sectional dimensions and quantity of the tension reinforcement, when compared with the positive effect that the limited exploitation of shear strength will have on the quantity of shear reinforcement. Parametric investigation of this problem will naturally be needed. Our model proposal can then be completed: the vaulted lattice model will be combined with a 0.1 f_ck shear strength exploitation of the concrete compression chord along the central part of the beam.

Along the arched section of the compression line, where the vertical component of the concrete compression force results in a higher contribution to the equilibration of shear than the above exploitation rate of shear strength, there is no need – and it is not even reasonable – to take this effect into consideration. The constant direction changes of the concrete principal stresses along this section of the beam take place precisely because the concrete supports significant shear, and this is the reason for the reduction of the concrete compression strength by the effectiveness factor ν = 0.6(1 − f_ck/250) when determining V_Rd,max. Accordingly, there is no need to reduce the value of V_Rd,max as given in EC2 by a further reduction factor. The previous two ways of considering the concrete compression force due to flexure in the top chord of the beam when determining the shear capacity can pass over into each other by respecting the greater of the two values, the shear fraction of the horizontal compression and the vertical component of the diagonal compression: …. Here, in the index (γ + sh), γ relates to the inclination angle of the compression line and sh to the shear strength of the compressed concrete.

The rupture polygon of the vaulted lattice model

The sides of the rupture polygon are perpendicular to the compression line and parallel to the direction θ(x) (see Fig. 6). N_cV(x) can then be determined by Eq. (1). The shear force fraction V_Ed,s to be equilibrated by the stirrups can be determined from the equilibrium of vertical forces: …

Checking of the beam end by application of the vaulted lattice model

The embedment length of the longitudinal reinforcement is determined by supposing 45° as an approximation of the primary crack angle at the internal support face: … The pull-out force of the tension reinforcement can be determined from the equilibrium of horizontal forces: … Here, x_1 is the x-coordinate of the compression line point from which the internal edge of the support can be seen under an angle θ = θ_EC2 (Fig. 7: anchorage check at the beam end).
Here: … Corresponding to the proposal, a tension force F_Ed,s should be anchored by the longitudinal reinforcement along the length l_s, which can be determined from the moment equilibrium condition concerning the rupture polygon. The point of investigation – the point along the compression line with x = x_1 – can be determined by step-by-step calculation, using the numerically determined value of the compression line ordinate z(x). Our numerical investigation resulted in the approximate value of the steel pull-out force F_Ed,s, as indicated below: …, that is, about 10% greater than the horizontal component of the inclined concrete compression force intersecting the axis of the tension reinforcement under the angle θ_A above the support point. It is in any case a safer value than F_td given in EC2 (2005, (6.18)) as the additional tensile force developing in the longitudinal reinforcement due to shear: … Here, V_Ed,red is the shear force at distance d from the internal face of the support. F_td was namely determined by considering the moment equilibrium condition of the parallel-chord truss with effective depth z ≈ 0.9d. At the end of the beam this effective depth is questionable, and because the force F_Ed,s is proportional to 1/z, the force determined by (15) seems to be underestimated. In the numerical examples, the force F_Ed,s was determined by (14).

Transformation of numerical results obtained by use of the vaulted lattice model for practical applications

As the capacity of the shear reinforcement is in a linear relationship with both the internal lever arm z and the cotangent of the compression strut inclination angle θ, and according to our model proposal both of these parameters are variable along the beam axis, the fraction of the shear force that is to be equilibrated by the shear reinforcement according to (10) should be transformed in order to be comparable to the actual shear force of EC2, or to its greatest value V_Ed,red, respectively: …, where the parameters in the denominator are those of the vaulted model and both rate multipliers are greater than 1. Values of V^arched_Ed,s,tr(x) can then be treated as actual shear forces for the design of the shear reinforcement according to EC2.

As the actual shear force decreases monotonically from V_Ed,red towards the centre of the span, the relationship … can be considered as a safe quota of V_Ed,red which is transmitted to the supports by the arch effect and through the shear resistance of the compressed concrete of the reinforced concrete beam. By taking into consideration the favourable effect of the vaulted lattice model with compressed-sheared top chord, the shear reinforcement can be designed for the force …, where … Otherwise, the design procedure of EC2 can be followed in all respects, with only one exception. The exception concerns the value of the pull-out force to be anchored by the tension reinforcement at the beam end, which is to be determined by (14). A proposal for the value of α_cn will be given after the evaluation of the numerical examples. In Fig. 8, the shear force diagrams V_Ed, V_Ed,EC2, V^arched_Ed,s, V^arched_Ed,s,tr and V^arched_Ed,EC2 are shown for one of the numerical examples.
Shear force diagrams: V_Ed: design value of the actual shear force; V_Ed,max: design value of the actual shear force at support A; V_Ed,EC2 (or V_Ed,red): design value of the actual shear force according to Eurocode 2; V^arched_Ed,s: design value of the actual shear force to be equilibrated by the shear reinforcement according to the vaulted lattice model with compressed-sheared top chord; V^arched_Ed,s,tr: transposed design value of the actual shear force to be equilibrated by the shear reinforcement according to the vaulted lattice model with compressed-sheared top chord; and V^arched_Ed,s,EC2: proposed design value of the actual shear force to be equilibrated by the shear reinforcement according to Eurocode 2, determined by taking into consideration the vaulted lattice model with compressed-sheared top chord.

Numerical examples

6.1 Characteristics of the investigated beams

The results of two series of numerical examples will be shown below. Emphasis will be laid on the designed shear reinforcement, the shear capacity fraction attributed to the concrete, the way of anchorage of the internal horizontal force at the support, and the value of the quota α_cn. Calculations were made according to the vaulted lattice model and the prescriptions of EC2.

In one of the two series of examples monolithic beams, in the other prefabricated beams were analyzed, both simply supported, with data corresponding to the needs of construction practice. The two series of beams differ mainly in geometry: the support length of monolithic beams was 250 mm, that of prefabricated beams 150 mm; the l/d slenderness ratio of monolithic beams ranged from 14 to 18, that of prefabricated beams from 18 to 22. The intensity of the uniformly distributed load was adopted so that the support reaction force was in all examples equal to 0.8 V_Rd,max.

For the value of the compression strut inclination angle θ as defined by EC2, 45°, 37.5°, 30° and 21.6° were adopted. The value of θ_A was then determined according to (4).

Characteristics of the monolithic beams: concrete C30/37, reinforcement B60.50, vertical links Ø8, straight longitudinal reinforcement Ø16, 30 cm web thickness, 20 mm minimum concrete cover, 25 cm support length. The internal lever arm z was a variable parameter between 200 and 500 mm in steps of 75 mm. The effective depth was determined by the approximation z = 0.9d. To each value of z one theoretical span was assigned so that the members of the series of beams would be uniformly distributed along the slenderness domain 14 ≤ l/d ≤ 18, characteristic for monolithic reinforced concrete beams (l_eff = 4.0, 5.0, 6.0, 7.0 and 8.0 m).

Characteristics of the prefabricated reinforced concrete beams: concrete C40/50, reinforcement B60.50, vertical links Ø8, straight longitudinal reinforcement Ø16, 16 cm web thickness, 20 mm minimum concrete cover, 15 cm support length. The internal lever arm z was a variable parameter between 300 and 700 mm in steps of 100 mm. The effective depth was determined by the approximation z = 0.9d. To each value of z one theoretical span was assigned so that the members of the series of beams would be uniformly distributed along the slenderness domain 18 ≤ l/d ≤ 22, characteristic for prefabricated reinforced concrete beams (l_eff = 7.2, 9.0, 10.5, 12.0 and 14.4 m).

Results and evaluation

Results

The most important results of the numerical examples were arranged in 2×3 tables, which are available on the home page szt.bme.hu under munkatársak/oktatók és doktoranduszok/Draskóczy/.
One table was made for each of θ = 21.8°, 30° and 45°. Then the shear force fractions attributed to the concrete are given, the number of links according to (1), and the saving of links in the case of the arched model, expressed in %, when compared with the results of (2) and (3), respectively. The ratio F_Ed,s/F_Rd,s in the last but one row gives the fraction of the bottom reinforcement designed for moment which is to be led up to and anchored at the end of the beam, to equilibrate the horizontal component of the inclined concrete compression force. The force to be anchored back was determined according to (14). Then two numbers give the surplus of the shear force fraction supported by the concrete according to the vaulted model, when compared with the two kinds of EC2 analysis. Finally, in the last line, the safe value of the quota α_cn is given according to (17) for each of the numerical examples.

The given stirrup spacings are multiples of 25 mm and satisfy, with only one exception, the construction rules given in EC2: in case the spacing resulted in 25 mm – for a better overview of the results – the diameter of the stirrups was not increased.

Results evaluation

In Table 2, intervals of the quota α_c are given as obtained in a series of the numerical examples. Values for θ = 21.8° are definitely smaller. This is the consequence of the increase of the rate factor z_EC2/z(x), due to the small lever arm z(x) near the support in the case of the vaulted model. For greater values of θ, the minimum value of α_c will be obtained – as mentioned earlier – at approximately the quarter point of the span, and will be only a little under 0.25. This is a numerical proof that compression strut inclination angles θ smaller than 30° have little advantage, because besides the anchorage problems of the bottom bars, the arching effect can scarcely be exploited. Based on the results of the numerical investigation above, the following modifications are proposed for the design of the shear reinforcement (vertical stirrups) of reinforced concrete beams loaded predominantly by a uniformly distributed load:

The condition V_Ed,max ≤ V_Rd,max should always be fulfilled. If V_Ed,red > V_Rd,c, the shear reinforcement (vertical stirrups) must be designed. In this case: … The pull-out force F_Ed,s at the beam end should be determined by (14), and the compression strut inclination angle θ_A at the support point by (4). Values of V_Rd,c, V_Rd,max and V_Rd,s will all be determined according to EC2.

Evaluation of test results by application of the vaulted model

Conclusions

1 For each of the three groups of beams it can be observed that with increasing spacing of the stirrups (see Fig. 1) the maximum load-bearing capacity will generally be reached at decreasing strut inclination angles, and that with the vaulted model the strut inclination angle θ at failure is somewhat higher. From these tendencies the following conclusions can be drawn: a) with decreasing shear reinforcement intensity, beams tend to resist by reaching smaller strut inclination angles; b) in the case of the vaulted model, a higher resistance load intensity at a greater strut inclination angle can be determined.

2 For beam type A, the calculation according to EC2 results in failure of the stirrups for every second beam, whereas by application of the vaulted model only for one of the 12 beams, which is in better accordance with the real failure modes in the tests.
3 The ratio p_u/p_Rd is, by application of the vaulted model – for beam type A – smaller than or at most equal to the ratio determined according to the EC2 calculations, and is nearer to the desirable value of approximately 1.5.

Summary of conclusions

Based on the beam test results and the results of the numerical examples obtained by applying a vaulted D-region lattice model proposal, the author proposes the use of about 30° compression strut inclination angles at extreme supports of reinforced concrete beams loaded predominantly by a uniformly distributed load, which results in about 25% less transverse reinforcement intensity and – because of end anchorage problems – some increase of the longitudinal bottom reinforcement at the beam end. This kind of change fits well with present technological demands. Further test investigation is needed to elaborate constructional rules for design practice.

Tab. 1. Capacities and failure modes of test beams.
Fig. 5. The variation of the strut inclination angle θ(x) and visualization of the variation along the left half of a beam for θ = 21.8°.
Fig. 8. Diagrams of shear forces to be equilibrated by the shear reinforcement of one of the numerical examples.
Tab. 2. Intervals of the quota α_cn for the investigated series of numerical examples, θ (θ_A).
Tab. 3. Comparison of the results of EC2 calculations, tests and arched model calculations. Here α_cn = 0.25 if 30° ≤ θ ≤ 45°, where (19): V_Rd,cn = α_cn · V_Ed,red.
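The proposed concrete-contribution quota can be encoded directly from equation (19) and the validity range stated above; in this minimal sketch the numerical value of V_Ed,red is a hypothetical input, not a result from the paper.

```python
def v_rd_cn(v_ed_red: float, theta_deg: float) -> float:
    """Shear fraction attributed to the concrete, V_Rd,cn = alpha_cn * V_Ed,red,
    with alpha_cn = 0.25 proposed for 30 deg <= theta <= 45 deg (eq. (19))."""
    if not 30.0 <= theta_deg <= 45.0:
        raise ValueError("alpha_cn = 0.25 is proposed only for 30-45 degrees")
    return 0.25 * v_ed_red

# Hypothetical example: V_Ed,red = 400 kN at theta = 30 deg -> 100 kN carried by concrete.
print(v_rd_cn(400.0, 30.0))
```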
Antenatal bleeding: Case definition and guidelines for data collection, analysis, and presentation of immunization safety data

Malavika Prabhu, Linda O. Eckert, Michael Belfort, Isaac Babarinsa, Cande V. Ananth, Robert M. Silver, Elizabeth Stringer, Lee Meller, Jay King, Richard Hayman, Sonali Kochhar, Laura Riley, for The Brighton Collaboration Antenatal Bleeding Working Group (Brighton Collaboration homepage: http://www.brightoncollaboration.org)

1. Preamble

1.1. Need for developing case definitions, and guidelines for data collection, analysis, and presentation for antenatal bleeding as an adverse event

Bleeding in the second and third trimesters of pregnancy affects 6% of all pregnancies and has distinct etiologies from first-trimester bleeding [1]. In the vast majority of cases, antenatal bleeding is vaginal and obvious; rarely, however, it may be contained within the uterine cavity, the intraperitoneal space, or the retroperitoneal space. The etiologies of antenatal bleeding, also referred to as antepartum hemorrhage, are heterogeneous. In cases of severe antepartum hemorrhage, complications include preterm delivery, cesarean delivery, blood transfusion, coagulopathy, hemodynamic instability, multi-organ failure, salpingectomy/oophorectomy, peripartum hysterectomy, and, in some cases, either perinatal or maternal death.

The goal of this Working Group was two-fold: (1) to define sources of pathologic antenatal bleeding in the second or third trimester of pregnancy that are directly attributable to pregnancy and are either common and/or catastrophic; (2) to define each source of antenatal bleeding for the purposes of future case ascertainment.

The charge to the Brighton Collaboration Working Groups to define various adverse obstetric and pediatric events includes an aim to more easily identify immunization-related adverse events. In the case of antenatal bleeding, our Working Group felt strongly that there is no biologic plausibility or mechanistic explanation linking immunizations to antenatal bleeding. Moreover, as immunizations and antenatal bleeding are common occurrences in the course of any individual pregnancy, it is quite likely that these events will co-occur without suggesting causation. To date, there is one case report of antenatal bleeding occurring in a pregnancy in which a tetanus, diphtheria, and acellular pertussis vaccination was also administered [2]. However, the definition used to identify the antenatal bleeding event is not clearly presented. Standardized definitions across trials, surveillance systems, or clinical settings will facilitate case ascertainment and analysis of potential risk factors for antenatal bleeding.

In this document, we focus on placenta previa, morbidly adherent placentation, vasa previa, placental abruption, cesarean scar pregnancy, intra-abdominal pregnancy, and uterine rupture as important sources of antenatal bleeding. Cesarean scar pregnancy and intra-abdominal pregnancy are rarely listed as causes of antenatal bleeding in the second and third trimester.
Nonetheless, we included these causes as they are more likely to result in late presentation with a high risk of heavy maternal bleeding in settings in which ultrasound diagnosis of pregnancy is limited or unavailable. Another common source of bleeding is labor, whether at term or preterm. Although preterm labor is pathologic and addressed in another document [3], bleeding in the context of labor alone is not; it is therefore not addressed in our document. Non-obstetric genital tract bleeding may also occur during pregnancy, including neoplastic, infectious, traumatic, or iatrogenic causes. Urinary tract infections or hemorrhoids may also be misidentified as antenatal bleeding until additional workup is performed. This document will focus solely on the pregnancy-attributable etiologies of antenatal bleeding.

1.2. Methods for the development of the case definition, and guidelines for data collection, analysis, and presentation for antenatal bleeding as an adverse event

Following the process described in the overview paper [4] as well as on the Brighton Collaboration Website http://www.brightoncollaboration.org/internet/en/index/process.html, the Brighton Collaboration Antenatal Bleeding Working Group was formed in 2016 and includes members with a diverse background in clinical experience, location of practice, and scientific expertise in sources of antenatal bleeding. The composition of the working and reference group, as well as the results of the web-based survey completed by the reference group with subsequent discussions in the working group, can be viewed at: http://www.brightoncollaboration.org/internet/en/index/working_groups.html.

To guide decision-making for case definitions, a literature search was performed in PubMed, including the following terms: pregnancy, antenatal bleeding, antepartum bleeding, antepartum hemorrhage, placenta previa, vasa previa, abruptio placenta, placenta accreta, morbidly adherent placenta, abdominal pregnancy, cesarean scar pregnancy, uterine rupture, intra-abdominal pregnancy, and vaccination. Major obstetric textbooks and published guidelines from major obstetric societies throughout the world were also surveyed. This review resulted in a detailed summary of 33 articles used to establish case definitions for antenatal bleeding. The search also resulted in the identification of 1 reference containing information regarding vaccination administration and antenatal bleeding (as defined by the PubMed search terms listed above).

Description of sources of antenatal bleeding

We first begin with a brief description of each etiology, the underlying pathophysiology, incidence, and risk factors. For most conditions, incidence data are derived from settings in which the condition has been most systematically studied, often North America and Western Europe. Incidence data not derived from these areas are specified in the following paragraphs.

Placenta previa

Placenta previa occurs when the placenta partially or completely overlies the internal cervical os. This is in contrast with low-lying placenta, in which the placenta lies within 2 cm of the internal cervical os but does not extend across it. The etiology of placenta previa is unknown. Risk factors include smoking, advanced maternal age, multiparity, in vitro fertilization, multiple gestation, Asian race, prior endometrial damage, prior pregnancy termination or spontaneous abortion, prior cesarean delivery, and prior placenta previa [1,5,6].
These risk factors suggest that the pathogenesis may be driven by endometrial damage or suboptimal endometrial perfusion in other areas of the uterus. The incidence of placenta previa at term is approximately 1 in 200 pregnancies; the incidence is higher earlier in gestation, but many placenta previas resolve as the lower uterine segment develops and the placenta preferentially expands towards more vascularized areas of the uterus [1,5].

Morbidly adherent placentation

Morbidly adherent placentation occurs when the placenta implants abnormally into the uterine myometrium, rather than the normal implantation of the placenta into the uterine decidua basalis [1,5,7]. Invasive placentation occurs as a result of the absence of the decidua basalis and incomplete development of, or injury to, Nitabuch's layer [1,5,8]. The incidence of morbidly adherent placentation is 1 in 300 to 1 in 500 pregnancies [5]. The most significant risk factor is placenta previa in the context of one or more prior cesarean deliveries, or other uterine surgery. With one prior cesarean delivery and a placenta previa, the risk is 11%; with 3 or more cesarean deliveries and a placenta previa, the risk is greater than 60% [9]. Other common risk factors include advanced maternal age, advanced parity, cesarean scar pregnancy, and in vitro fertilization [5,7,10-12].

Vasa previa

Vasa previa occurs when fetal blood vessels course within the amniotic membranes across the internal cervical os or within 2 cm of the os. Type I vasa previa occurs with a velamentous umbilical cord insertion into the membranes, consequently allowing fetal vessels to run free within the membranes between the umbilical cord and the placenta. Type II vasa previa occurs with the development of a succenturiate placental lobe and a main placental lobe, connected by fetal vessels that freely course within the membranes. Vasa previa is rare, with an incidence of 1 in 2500 deliveries. Risk factors include resolved low-lying placenta, placenta previa, and multiple gestation [5,30].

Cesarean scar pregnancy

A cesarean scar pregnancy is an ectopic pregnancy implanted in a previous cesarean (hysterotomy) scar, surrounded by myometrium and connective tissue. This occurs due to a small defect in the cesarean scar, resulting from poor healing and poor vascularization of the lower uterine segment with resultant fibrosis [31]. The pathophysiology of cesarean scar pregnancies is similar to that of an intrauterine pregnancy with morbidly adherent placentation [32]. Cesarean scar pregnancies occur in about 1 in 2000 pregnancies and account for 6% of ectopic pregnancies among women with a prior cesarean delivery [31]. As the recognition of cesarean scar pregnancies is relatively recent, risk factors are not yet clear; however, as with morbidly adherent placentation, the incidence appears to correlate with the number of prior cesarean deliveries [32].

Intra-abdominal pregnancy

Intra-abdominal pregnancy is a rare form of ectopic pregnancy in which a pregnancy implants into the peritoneal cavity or abdominal organs. Most commonly, this occurs due to a tubal ectopic pregnancy with tubal extrusion or rupture and secondary implantation; primary implantation into the peritoneal cavity is also possible. Pregnancies may be asymptomatic, or may present with life-threatening intra-abdominal hemorrhage. The incidence is difficult to ascertain, as data are derived from case reports, but is reported to be 1-2 in 10,000.
Risk factors are artificial insemination, in vitro fertilization, uterine surgeries, and prior tubal or cornual pregnancy [33,34].

Uterine rupture

Uterine rupture is the complete nonsurgical disruption of all layers of the uterus. Uterine rupture may occur either in an unscarred uterus or at the site of a prior hysterotomy scar. The incidence of rupture of the unscarred uterus is approximately 1 in 20,000 deliveries in high-resource settings, but can be as high as 1 in 100 deliveries in low-resource settings, where the majority of this type of rupture occurs [35-37]. Risk factors for uterine rupture in an unscarred uterus include a contracted pelvis, prolonged dystocic labor, multiparity, morbidly adherent placentation, malpresentation, use of strong uterotonic drugs perhaps with cephalopelvic disproportion, operative vaginal deliveries at high station, and congenital weakness of the myometrium [35]. In high-resource settings, uterine rupture most commonly occurs in the context of a prior hysterotomy scar or transfundal surgery [37]. The incidence of this event ranges from approximately 1 in 200 up to 1 in 10, depending on the type of hysterotomy and the use of labor augmentation [38,39]. Additional risk factors include the number of prior cesarean deliveries, interdelivery interval less than 18 months, one-layer uterine closure, and open fetal surgery [40-42].

The number of signs, symptoms, and diagnostic tests that will be documented for each case may vary considerably. The case definition has been formulated such that the Level 1 definition is highly specific for the condition. As maximum specificity normally implies a loss of sensitivity, an additional diagnostic level has been included in the definition to increase sensitivity while retaining an acceptable level of specificity. In this way, it is hoped that all possible cases of antenatal bleeding can be systematically captured. The grading of definition levels is about diagnostic certainty, not clinical severity of an event. Thus, a clinically very severe event may appropriately be classified as Level 2 or 3 rather than Level 1 if it could reasonably be of an alternative etiology, whether another cause of antenatal bleeding or unrelated to antenatal bleeding entirely. Detailed information about the severity of the event should always be recorded, as specified by the data collection guidelines [43].

1.4.2. Rationale for individual criteria or decision made related to the case definitions

1.4.2.1. Pathology findings. In certain cases, pathologic findings serve as the gold standard to confirm the presence of a pathologic entity. This is the case for morbidly adherent placentation, where the surgical specimen is often the hysterectomy specimen with placenta in-situ. Pathologic findings for cesarean scar pregnancy managed by hysterectomy with the gestational sac in-situ are also the gold standard for diagnosis; however, a hysterectomy is not always performed, and histologic confirmation may not be possible. Histological findings identify many but not all cases of placental abruption. The other etiologies of antenatal bleeding included within this document do not lend themselves to a histologic diagnosis.

Laboratory findings. No specific laboratory findings were included in case definitions of antenatal bleeding, as none of these clinical entities are associated with specific or identifiable laboratory parameters.
Anemia and coagulopathy associated with significant antenatal bleeding are to be diagnosed and managed using usual clinical algorithms.

Radiology findings. Ultrasound findings in a pregnancy complicated by antenatal bleeding are highly important in identifying and differentiating several conditions and thus are included in many case definitions. MRI findings may be used in some circumstances when this modality is available. See below regarding safety data.

Safety of imaging in pregnancy

Prenatal ultrasound uses sound waves passing through an acoustic window to visualize deeper tissue and structures, including a fetus. Ultrasound is considered safe in pregnancy, and there have been no reports of adverse fetal or neonatal outcomes from prenatal ultrasound imaging. Applying the ALARA (As Low As Reasonably Achievable) principle is recommended during diagnostic imaging procedures [44,45]. Magnetic resonance imaging (MRI) technology has also been used in pregnancy for several indications after inconclusive or nondiagnostic prenatal ultrasound. There has never been any documented fetal or neonatal harm, and the procedure is considered safe in pregnancy. While the quality of imaging may be superior with gadolinium-enhanced imaging, its use is not currently recommended in pregnancy due to theoretical harms. Nonetheless, clear harm from gadolinium has not been demonstrated [45].

Timing of adverse event with relation to timing of immunization

As noted in the preamble of this document, both immunizations and antenatal bleeding are common events in pregnancy. We feel strongly there is no current evidence or biological plausibility to suggest a causal link between immunization and antenatal bleeding. In order to appropriately assess this question, pregnant women who are and are not exposed to immunizations would need to be prospectively studied to identify any association with antenatal bleeding. However, withholding immunizations in pregnancy would not be ethical, and thus we are left with case reports and other epidemiologic studies of association that may lead to inappropriate conclusions.

Differentiation from other associated disorders

As previously discussed, the focus of this Working Group is to define pathologic primary causes of antenatal bleeding. Labor, whether at term or preterm, may present with vaginal bleeding, yet in this instance, the pathway of preterm labor is the primary pathologic event. The Brighton Working Group on Pathways of Preterm Birth describes the pathophysiology of preterm labor in detail [3].

Guidelines for data collection, analysis and presentation

As mentioned in the overview paper, the case definition is accompanied by guidelines that are structured according to the steps of conducting a clinical trial, i.e. data collection, analysis and presentation. Case definitions and guidelines are not intended to guide or establish criteria for management of ill infants, children, or adults. Both were developed to improve data comparability.

Periodic review

Similar to all Brighton Collaboration case definitions and guidelines, review of the definition with its guidelines is planned on a regular basis (i.e. every three to five years) or more often if needed.

For all levels of diagnostic certainty

Antenatal bleeding is a clinical syndrome characterized by bleeding in the second or third trimester of pregnancy.
Pathologic etiologies attributable to the pregnant state include placenta previa, morbidly adherent placenta, vasa previa, placental abruption, cesarean scar pregnancy, intra-abdominal pregnancy, and uterine rupture.

For both levels of diagnostic certainty for each etiology of antenatal bleeding: The patient is determined to be in the second or third trimester of pregnancy (refer to Brighton Working Group document to establish dating in pregnancy [46]). Bleeding is either documented vaginally or suspected to be occurring intrauterine, intraperitoneally, or (rarely) retroperitoneally, based on clinical signs and symptoms. In the case of ultrasound-based diagnosis, transvaginal ultrasound is more specific than transabdominal ultrasound, and transvaginal ultrasound is recommended where available. For each definition, the diagnostic levels reflect diagnostic certainty and must not be misunderstood as reflecting different grades of clinical severity. Moreover, defining levels of clinical severity of antenatal bleeding is beyond the scope of this document.

Morbidly adherent placentation

Level 1 - Second- or third-trimester ultrasound or MRI evidence of placenta previa, AND one of the following ultrasound features noted in Table 1, AND one of the risk factors as noted in Table 2, OR morbidly adherent placentation found on histology in a hysterectomy or partial wedge resection specimen.

Level 2 - There are two definitions of equal specificity. Ultrasound evidence of placenta previa, AND hypervascularity at the site of the uteroplacental interface, diagnosed at laparotomy. OR Difficulty with placental separation after delivery of the infant, at either a vaginal or cesarean delivery, with resultant hemorrhage due to partial separation.

Vasa previa

Level 1 - Second trimester ultrasound evidence of fetal vessels (vessel with fetal heart rate identified by color flow Doppler) running through the membranes and overlying the internal cervical os, AND post-delivery examination of the placental specimen with unsupported fetal vessels within the membranes.

Level 2 - Vaginal bleeding in the second or third trimester at the time of ruptured amniotic membranes, AND fetal heart rate changes ultimately resulting in sinusoidal rhythm/terminal bradycardia, AND delivery of a pale, anemic infant or recent stillbirth or neonatal death [48], AND post-delivery examination of the placental specimen with unsupported fetal vessels within the membranes.

Cesarean scar pregnancy

Level 1 - There are two definitions of equal specificity. Transvaginal ultrasound with the following characteristics: empty uterine cavity, AND empty cervical canal, without contact with the gestational sac, AND presence of gestational sac, +/- fetal pole, +/- cardiac activity, in the anterior uterine segment adjacent to the cesarean scar, AND absence or defect in myometrium between bladder and gestational sac, AND gestational sac well perfused on Doppler ultrasound (to differentiate from an expulsing, avascular gestational sac). OR Hysterectomy specimen with evidence of pregnancy implanted into the cesarean scar. There is no Level 2 definition for this condition.

Intra-abdominal pregnancy

Level 1 - At laparotomy, a fetus found within the abdominal cavity, without evidence of uterine rupture, and with placentation not within the uterine cavity. There is no Level 2 definition for this condition.

Uterine rupture

Level 1 - Complete uterine disruption at the time of laparotomy in the context of vaginal or intra-abdominal bleeding. There is no Level 2 definition for this condition.
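Because each diagnostic level is a fixed conjunction and disjunction of observable criteria, definitions like these translate naturally into boolean predicates, which is convenient when screening surveillance databases automatically. The minimal Python sketch below encodes the cesarean scar pregnancy Level 1 definition; the class and field names are our own shorthand, not part of the Brighton wording.

```python
from dataclasses import dataclass

@dataclass
class CSPFindings:
    """Findings relevant to the cesarean scar pregnancy (CSP) Level 1 definition."""
    empty_uterine_cavity: bool
    empty_cervical_canal: bool            # no contact with the gestational sac
    sac_anterior_at_cesarean_scar: bool   # sac in anterior segment adjacent to the scar
    myometrial_defect_near_bladder: bool  # absent/deficient myometrium between bladder and sac
    sac_perfused_on_doppler: bool         # excludes an expulsing, avascular sac
    hysterectomy_shows_scar_implantation: bool = False

def csp_level(f: CSPFindings):
    """Return 1 if the Level 1 definition is met, else None (no Level 2 exists for CSP)."""
    ultrasound_route = all([
        f.empty_uterine_cavity,
        f.empty_cervical_canal,
        f.sac_anterior_at_cesarean_scar,
        f.myometrial_defect_near_bladder,
        f.sac_perfused_on_doppler,
    ])
    return 1 if (ultrasound_route or f.hysterectomy_shows_scar_implantation) else None
```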
Guidelines for data collection, analysis and presentation of antenatal bleeding

It was the consensus of the Brighton Collaboration Antenatal Bleeding Working Group to recommend the following guidelines to enable meaningful and standardized collection, analysis, and presentation of information about antenatal bleeding. However, implementation of all guidelines might not be possible in all settings. The availability of information may vary depending upon resources, geographical region, and whether the source of information is a prospective clinical trial, post-marketing surveillance or epidemiological study, or an individual report of antenatal bleeding. Also, as explained in more detail in the overview paper in this volume, these are intended as guidelines and are not to be considered a mandatory requirement for data collection, analysis, or presentation.

Data collection

These guidelines represent a desirable standard for the collection of available data following immunization to allow for comparability of data, and are recommended as an addition to data collected for the specific study question and setting. They are not intended to guide the primary reporting of antenatal bleeding for a surveillance system or study monitor. Investigators developing a data collection tool based on these data collection guidelines also need to refer to the criteria in the case definition, which are not repeated in these guidelines. Guideline numbers below have been developed to address data elements for the collection of adverse event information as specified in general drug safety guidelines by the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use [49], and the form for reporting of drug adverse events by the Council for International Organizations of Medical Sciences [50]. These data elements include an identifiable reporter and patient, one or more prior immunizations, and a detailed description of the adverse event of antenatal bleeding. Additional guidelines have been developed as direction for the collection of additional information to allow for a more comprehensive understanding of antenatal bleeding [4,43].

Source of information/reporter

For all cases and/or all study participants, as appropriate, the following information should be recorded:

(1) Date of report.
(2) Name and contact information of person reporting^3 and/or diagnosing the antenatal bleeding, as specified by country-specific data protection law.
(5) Case/study participant identifiers (e.g. first name initial followed by last name initial) or code (or in accordance with country-specific data protection laws).
(6) Date of birth, age, and sex.
(7) For infants: Gestational age and birth weight.

Clinical and immunization history

For all cases and/or all study participants, as appropriate, the following information should be recorded:

(8) Past medical history, including hospitalizations, underlying diseases/disorders, pre-immunization signs and symptoms including identification of indicators for, or the absence of, a history of allergy to vaccines, vaccine components or medications; food allergy; allergic rhinitis; eczema; asthma.
(9) Any medication history (other than treatment for the event described) prior to, during, and after immunization, including prescription and non-prescription medication as well as medication or treatment with long half-life or long-term effect (e.g. immunoglobulins, blood transfusion and immunosuppressants).
(10) Immunization history (i.e.
previous immunizations and any adverse event following immunization (AEFI)), in particular occurrence of antenatal bleeding after a previous immunization.

Details of the immunization

For all cases and/or all study participants, as appropriate, the following information should be recorded. If this is not feasible due to the study design, the study periods during which safety data are being collected should be clearly defined.

Notes:
^4 The date and/or time of onset is defined as the time post immunization when the first sign or symptom indicative for antenatal bleeding occurred. This may only be possible to determine in retrospect.
^5 The date and/or time of first observation of the first sign or symptom indicative for antenatal bleeding can be used if the date/time of onset is not known.
^6 The date of diagnosis of an episode is the day post immunization when the event met the case definition at any level.
^7 The end of an episode is defined as the time the event no longer meets the case definition at the lowest level of the definition.
^8 E.g. recovery to pre-immunization health status, spontaneous resolution, therapeutic intervention, persistence of the event, sequelae, death.
^9 An AEFI is defined as serious by international standards if it meets one or more of the following criteria: (1) it results in death, (2) is life-threatening, (3) it requires inpatient hospitalization or results in prolongation of existing hospitalization, (4) results in persistent or significant disability/incapacity, (5) is a congenital anomaly/birth defect, (6) is a medically important event or reaction.

Data analysis

The following guidelines represent a desirable standard for analysis of data on antenatal bleeding to allow for comparability of data, and are recommended as an addition to data analyzed for the specific study question and setting.

(31) Reported events should be classified in one of the following five categories, including the two levels of diagnostic certainty. Events that meet the case definition should be classified according to the levels of diagnostic certainty as specified in the case definition. Events that do not meet the case definition should be classified in the additional categories for analysis.

Event classification in 5 categories^10

Event meets case definition:
(1) Level 1: Criteria as specified in the Antenatal Bleeding case definitions.
(2) Level 2: Criteria as specified in the Antenatal Bleeding case definitions.

Event does not meet case definition; additional categories for analysis:
(3) Reported antenatal bleeding with insufficient evidence to meet the case definition.^11
(4) Not a case of antenatal bleeding.^12

(32) The interval between immunization and reported antenatal bleeding (separated out by etiology) could be defined as the date/time of immunization to the date/time of onset^4 of the first symptoms and/or signs consistent with the definition. If few cases are reported, the concrete time course could be analyzed for each; for a large number of cases, data can be analyzed in the following increments:

Subjects with antenatal bleeding (specify which etiology) by interval to presentation:
  Interval / Number
  <24 h after immunization
  24 h to <7 days after immunization
  7 days to <30 days after immunization
  30 days or greater after immunization
  TOTAL

These intervals were arbitrarily chosen, as there is no biologic plausibility between vaccination and antenatal bleeding, as we have previously explained.
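The increments in guideline (32) translate directly into a bucketing function. The sketch below is a minimal Python illustration of how a dataset could be tabulated into these four intervals; the function and variable names are ours, not from the guideline.

```python
from datetime import datetime

# Buckets from guideline (32): lower bound in days (inclusive), upper bound (exclusive).
INTERVALS = [
    ("<24 h after immunization", 0, 1),
    ("24 h to <7 days after immunization", 1, 7),
    ("7 days to <30 days after immunization", 7, 30),
    ("30 days or greater after immunization", 30, None),  # open-ended
]

def interval_bucket(immunization: datetime, onset: datetime) -> str:
    """Assign a case to a reporting bucket by days from immunization to symptom onset."""
    days = (onset - immunization).total_seconds() / 86400.0
    if days < 0:
        raise ValueError("onset precedes immunization")
    for label, lo, hi in INTERVALS:
        if days >= lo and (hi is None or days < hi):
            return label
    raise AssertionError("unreachable")

# Example: onset 5 days after immunization lands in the second bucket.
print(interval_bucket(datetime(2024, 3, 1), datetime(2024, 3, 6)))
```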
We caution that all episodes of bleeding that occur temporally after vaccination may not be causally linked. For example, pre-existing conditions in pregnancy (i.e., history of abdominal trauma, abnormal placentation, abnormal pregnancy implantation) may predispose a patient to an event of antenatal bleeding, with the administration of a vaccination falling temporally along the pathophysiologic process to bleeding without any relation. In addition, we recommend recording both the gestational age at the time of immunization and the gestational age at the time of the bleeding event. Please refer to the Brighton Collaboration document on establishing gestational age.

(33) The duration of a possible antenatal bleeding event could be analyzed as the interval between the date/time of onset^4 of the first symptoms and/or signs consistent with the definition and the end of episode^7 and/or final outcome^8. Whatever start and ending are used, they should be used consistently within and across study groups.

(34) If more than one measurement of a particular criterion is taken and recorded, the value corresponding to the greatest magnitude of the adverse experience could be used as the basis for analysis. Analysis may also include other characteristics like qualitative patterns of criteria defining the event.

(35) The distribution of data (as numerator and denominator data) could be analyzed in predefined increments (e.g. measured values, times), where applicable. Increments specified above should be used. When only a small number of cases is presented, the respective values or time course can be presented individually.

(36) Data on antenatal bleeding obtained from subjects receiving a vaccine should be compared with those obtained from an appropriately selected and documented control group(s) to assess background rates of antenatal bleeding in non-exposed populations, and should be analyzed by study arm and dose where possible, e.g. in prospective clinical trials.

Data presentation

These guidelines represent a desirable standard for the presentation and publication of data on antenatal bleeding that occurs in a pregnancy in which immunizations are also administered to allow for comparability of data, and are recommended as an addition to data presented for the specific study question and setting. Additionally, it is recommended to refer to existing general guidelines for the presentation and publication of randomized controlled trials, systematic reviews, and meta-analyses of observational studies in epidemiology (e.g. the statements on Consolidated Standards of Reporting Trials (CONSORT), on improving the quality of reports of meta-analyses of randomized controlled trials (QUOROM), and on Meta-analysis Of Observational Studies in Epidemiology (MOOSE), respectively).

(37) All reported events of antenatal bleeding should be presented according to the categories listed in guidelines 31 and 32.

(38) Data on possible antenatal bleeding events should be presented in accordance with data collection guidelines 1-24 and data analysis guidelines 31-36.

(39) Terms to describe antenatal bleeding such as "low-grade", "mild", "moderate", "high", "severe" or "significant" are highly subjective, prone to wide interpretation, and should be avoided, unless clearly defined.

^10 To determine the appropriate category, the user should first establish whether a reported event meets the criteria for the lowest applicable level of diagnostic certainty, e.g. Level three.
If the lowest applicable level of diagnostic certainty of the definition is met, and there is evidence that the criteria of the next higher level of diagnostic certainty are met, the event should be classified in the next category. This approach should be continued until the highest level of diagnostic certainty for a given event can be determined. Major criteria can be used to satisfy the requirement of minor criteria. If the lowest level of the case definition is not met, it should be ruled out that any of the higher levels of diagnostic certainty are met, and the event should be classified in additional categories four or five.

^11 If the evidence available for an event is insufficient because information is missing, such an event should be categorized as "Reported antenatal bleeding with insufficient evidence to meet the case definition".

^12 An event does not meet the case definition if investigation reveals a negative finding of a necessary criterion (necessary condition) for diagnosis. Such an event should be rejected and classified as "Not a case of antenatal bleeding".

(40) Data should be presented with numerator and denominator (n/N) (and not only in percentages), if available. Although denominator data are usually not readily available in immunization safety surveillance systems, attempts should be made to identify approximate denominators. The source of the denominator data should be reported and calculations of estimates described (e.g. manufacturer data like total doses distributed, reporting through the Ministry of Health, coverage/population-based data, etc.).

(41) The incidence of cases in the study population should be presented and clearly identified as such in the text.

(42) If the distribution of data is skewed, the median and range are usually more appropriate statistical descriptors than the mean. However, the mean and standard deviation should also be provided.

(43) Any publication of data on antenatal bleeding should include a detailed description of the methods used for data collection and analysis as far as possible. It is essential to specify:
  The study design.
  The method, frequency and duration of monitoring for antenatal bleeding.
  The trial profile, indicating participant flow during a study, including drop-outs and withdrawals, to indicate the size and nature of the respective groups under investigation.
  The type of surveillance (e.g. passive or active surveillance).
  The characteristics of the surveillance system (e.g. population served, mode of report solicitation).
  The search strategy in surveillance databases.
  Comparison group(s), if used for analysis.
  The instrument of data collection (e.g. questionnaire, diary card, report form).
  Whether the day of immunization was considered "day one" or "day zero" in the analysis.
  Whether the date of onset^4 and/or the date of first observation^5 and/or the date of diagnosis^6 was used for analysis.
  Use of this case definition for antenatal bleeding, in the abstract or methods section of a publication.^13

Disclaimer

The findings, opinions and assertions contained in this consensus document are those of the individual scientific professional members of the working group. They do not necessarily represent the official positions of each participant's organization.
A comparative study of Chinese medicine quality of life assessment scale (CQ-11D) and EQ-5D-5L and SF-6D scales based on Chinese population

Purpose: To measure health-related quality of life in the Chinese population using three universal health utility scales (CQ-11D, EQ-5D-5L, and SF-6D) and to compare the differences in the results obtained by the different scales, in order to provide a reference for future work on health-related quality of life in the Chinese population. Methods: Quota sampling was conducted according to the Chinese population's distribution area, gender, and age. After collecting respondents' demographic information, the three scales, CQ-11D, EQ-5D-5L, and SF-6D, were administered in succession, with results self-reported. The health utility values and floor/ceiling effects were examined. The Bland-Altman method was used to evaluate consistency, the intraclass correlation coefficient was used to evaluate correlation, and the receiver operating characteristic (ROC) curve was used to evaluate the discriminative validity of the scales. Results: The mean utility values of the CQ-11D, EQ-5D-5L, and SF-6D scales, respectively, were 0.891, 0.927, and 0.841. The floor effect did not appear in any of the three scales, but the ceiling effect did, and the EQ-5D-5L ceiling effect was the most severe. The limits of the agreement interval for CQ-11D versus EQ-5D-5L in the total sample population were (-0.245, 0.172); for CQ-11D versus SF-6D, they were (-0.256, 0.354); and for EQ-5D-5L versus SF-6D, they were (-0.199, 0.371). The consistency of the three scales is satisfactory overall. In the total population, the intraclass correlation coefficient between CQ-11D and EQ-5D-5L was 0.709, that between CQ-11D and SF-6D was 0.565, and that between EQ-5D-5L and SF-6D was 0.472. According to the ROC results, the area under the curve for the total sample population was 0.746 for CQ-11D, 0.669 for EQ-5D-5L, and 0.734 for SF-6D. Conclusion: The CQ-11D is inferior to the EQ-5D-5L, but superior to the SF-6D. There is a strong correlation between the health utility values of the total population as measured by the three scales and those of the healthy population. The CQ-11D scale is the most sensitive to differences between populations and diseases. Supplementary Information: The online version contains supplementary material available at 10.1007/s11136-023-03512-z.

Health-related quality of life (HRQoL) is a subjective concept that reflects the quality of life and measures people's physical, psychological, social, and spiritual personal role functions [1]. As the world's most widely utilized universal health utility scales, the EQ-5D and SF-6D can calculate quality-adjusted life years (QALYs). Cost-utility analysis (CUA) provides evidence-based support for the economic evaluation of health interventions.
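For orientation, the utility values these instruments yield enter CUA through the standard QALY calculation; the discrete textbook form below is generic background, not a formula quoted from this paper. With u_i the utility in period i and t_i its duration in years:

\[ \mathrm{QALY} = \sum_i u_i \, t_i \]

For example, two years spent at the CQ-11D mean utility reported here (0.891) would contribute 2 x 0.891 = 1.782 QALYs.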
Relevant studies have demonstrated the existence of a ceiling or floor effect in the measurement of HRQoL for the EQ-5D-5L and SF-6D scales [2-4]. The ceiling effect of the EQ-5D-5L has been reduced relative to the EQ-5D-3L; nevertheless, it still exists [5]. In addition, because these scales were researched and developed on foreign populations, they may not accurately reflect the health preferences and characteristics of the Chinese population in terms of the selection, emphasis, and description of health states. In recent years, scale health status description systems have placed a greater emphasis on the participation of the general population in the construction process, and results based on measurements of the general population have become more extrapolable and universal [6]. In addition, due to the differences in dimensions and levels of different scales, there may be differences in measurement results when applied to different populations; therefore, it is crucial to conduct head-to-head comparisons of different utility measurement instruments.

In empirical studies, the EQ-5D and SF-6D are predominantly used to measure HRQoL, and there is little research based on Chinese population characteristics or the perspective of traditional Chinese medicine. Consequently, this study utilizes the Chinese medicine quality of life utility scale (CQ-11D), which was developed on the Chinese population and grounded in the theory and health concepts of traditional Chinese medicine, together with the globally prevalent EQ-5D-5L and SF-6D [7]. The study further compares the distribution, correlation, and consistency of the health utility measurements of the three scales in the general population of China, and analyzes them in combination with influencing factors, aiming to explore the differences in the measurement results of the three scales, provide an empirical basis for comparative studies of utility scales developed on the Chinese population, and provide a reference for improving HRQoL in the Chinese general population and for researchers selecting appropriate quality of life assessment tools.

Research objects

This study covers the period from January 2022 to December 2022 and is based on research conducted on Chinese citizens across the nation. Within each of the seven geographic divisions investigated, 2-6 representative provinces, autonomous regions, and municipalities directly under the central government were selected, and quota sampling of the general Chinese population was carried out based on prior research experience and the distribution of area, gender, and age [8]. The local area of each investigator was chosen for the interview survey. Investigators conducted the research through one-on-one, face-to-face questionnaire surveys and searched for interviewees through encounter sampling in public areas (e.g., streets, communities, schools) within the jurisdiction of their local area [9,10]. The inclusion criteria were as follows: (1) participants must be at least 16 years old; (2) they must be Chinese citizens with Chinese nationality; (3) they must have resided in mainland China for the past five years; (4) they must have comprehended the research's background information and agreed to participate. The exclusion criteria for respondents were as follows: (1) had difficulties in listening, speaking, reading, or writing or could not comprehend the survey content; and (2) had a mental disorder.
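The inclusion and exclusion criteria above form a simple screening checklist; the minimal Python sketch below shows one way an eligibility check could be automated. The function and parameter names are ours, not from the study protocol.

```python
def eligible(age: int, chinese_national: bool, years_in_mainland: float,
             understood_and_consented: bool, can_comprehend_survey: bool,
             has_mental_disorder: bool) -> bool:
    """Screen a respondent against the stated inclusion and exclusion criteria."""
    included = (age >= 16 and chinese_national
                and years_in_mainland >= 5 and understood_and_consented)
    excluded = (not can_comprehend_survey) or has_mental_disorder
    return included and not excluded

print(eligible(30, True, 30.0, True, True, False))  # True
```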
All investigations were conducted with the informed consent of the subjects and with Ethics Committee approval. The survey was conducted with questionnaires, and appropriately trained investigators questioned the interviewees. The group of chronic patients was determined on the basis of respondents reporting chronic diseases confirmed by doctors. In addition, the respondents' sociodemographic characteristics (age, gender, income, smoking, drinking, and physical activity) were collected, and their health status was subsequently determined using the EQ-5D-5L, SF-6D, and CQ-11D scales.

Background and purpose of developing the CQ-11D scale

Related studies have shown that the EQ-5D-5L and SF-6D scales exhibit ceiling or floor effects when measuring HRQoL [11-13]. More importantly, although these scales have good reliability and validity and are widely used, they were developed on foreign populations. Their selection, focus, and description of health states may not accurately reflect the health preferences and characteristics of the Chinese population [6]; in particular, international universal quality of life scales cannot fully reflect the characteristics of the health outputs of traditional Chinese medicine interventions, so those outputs are insufficiently captured or even underestimated. From the point of view of quality of life and of patients, CQ-11D aims to be a quality of life scale based on the Chinese population that can objectively evaluate the reported outcomes of patients receiving traditional Chinese medicine interventions. Following international scale-development procedures, and drawing on the World Health Organization (WHO) concept of quality of life, the basic content of quality of life, the relevant content of foreign universal scales, the theory and health concepts of traditional Chinese medicine, the characteristics of traditional Chinese medicine interventions, and Chinese culture, experts in traditional Chinese medicine, scale development, and quality of life were consulted to construct the theoretical framework of a quality of life evaluation scale based on traditional Chinese medicine theory. The nature and applicable population of the scale were then defined according to the purpose of its development. Based on patient-reported outcomes, a quality of life evaluation scale grounded in traditional Chinese medicine theory was developed for evaluating the health-related quality of life of people receiving traditional Chinese medicine interventions.

Construction of the CQ-11D scale health utility score system

The health utility score system of the Chinese medicine Quality of Life-11 Dimensions (CQ-11D) is based on the health preferences of the Chinese population; it was constructed using a discrete choice experiment with survival time (DCE-TTO) and is used in conjunction with the corresponding TCM quality of life scale to calculate the subjects' health utility values. The study was designed to recruit at least 2400 respondents across mainland China to complete one-to-one, face-to-face questionnaire surveys. A total of 2586 people were invited to participate in the survey, and 2498 valid questionnaires were completed (a completion rate of 96.60%). The conditional logit model was ultimately selected to construct the health utility scoring system for CQ-11D utility measurement. The measurable health utility value range was -0.868 to 1 [7].
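Scoring systems estimated with a conditional logit model are typically applied as additive tariffs: each item level carries a utility decrement that is subtracted from full health. The Python sketch below illustrates that mechanic only; the decrement values are made-up placeholders, not the published CQ-11D coefficients from [7].

```python
# Placeholder decrements for two of the 11 CQ-11D items (levels 1-4).
# These numbers are illustrative only; the real tariff is the conditional
# logit model published in [7].
EXAMPLE_DECREMENTS = {
    "pain":    {1: 0.000, 2: 0.040, 3: 0.110, 4: 0.210},
    "fatigue": {1: 0.000, 2: 0.030, 3: 0.090, 4: 0.170},
    # ... in a full tariff, one entry per CQ-11D item (11 in total)
}

def utility(profile: dict, decrements: dict = EXAMPLE_DECREMENTS) -> float:
    """Score a health-state profile {item: level} as 1 minus the summed decrements."""
    return 1.0 - sum(decrements[item][level] for item, level in profile.items())

print(utility({"pain": 3, "fatigue": 2}))  # 1 - 0.110 - 0.030 -> 0.86 (up to float rounding)
```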
The utility value scoring systems created by Zhu Wentao and Luo N for the Chinese population were adopted for the CQ-11D and EQ-5D-5L scales, respectively. However, because a utility scoring system based on the Chinese population has not yet been developed for the SF-6D, the scoring system based on the Hong Kong population was adopted for that scale [10,14,15].

Dimensional comparison of CQ-11D, EQ-5D-5L and SF-6D

CQ-11D contains 11 items: movement and self-care, appetite, stool, quality of sleep, spirit (being alive, energetic, and focused), dizziness (feeling dizzy in the mind, with eyes closed for minor cases, or the scene spinning in severe cases, with inability to stand), palpitations (feeling restless), pain, fatigue, irritability, anxiety (worried, anxious, nervous, restless), and depression (frustrated, lack of interest in doing things, no fun, low energy). Mobility, self-care, daily activities, pain and discomfort, and anxiety or depression are the five dimensions of the EQ-5D-5L. Each dimension has five levels: no problems, slight problems, moderate problems, severe problems, and extreme problems [16]. SF-6D comprises 6 dimensions: physical function, role limitation, social function, pain, mental health, and vitality; each dimension has four to six levels.

The three scales differ in the number of dimensions and in the health states they can measure. The CQ-11D scale has 11 items with 4 levels each, and can measure 4^11 (i.e., 4,194,304) health states. The EQ-5D-5L scale has 5 dimensions with 5 levels each, and can measure 5^5 (i.e., 3,125) health states. The SF-6D scale has 6 dimensions, each with 4 to 6 levels, and can measure 18,000 health states. The three scales have certain similarities in terms of movement, pain, and mental health, but CQ-11D items such as spirit and energy, bowel movements, sleep quality, and appetite are not included in the other two scales.

Quality control

After the investigation was completed, the survey members of the research group reviewed the data and eliminated any data obtained when an investigator failed to follow the Investigation Manual. This ensured the quality of the investigation.

Data analysis

Four distinct analyses were performed on the collected data: descriptive analysis, health utility distribution and ceiling/floor effect, consistency and correlation analysis, and a scale sensitivity study. Since the health utility measurements obtained through the three scales' utility value scoring systems can be used directly for quantitative comparison of measurement results, the health utility value was chosen as the primary analysis index in this study.
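The health-state counts above are simply products of the number of levels per dimension and are easy to verify; note that the per-dimension level split assumed below for SF-6D (6, 4, 5, 6, 5, 5) is the commonly reported structure, which reproduces the 18,000 figure.

```python
from math import prod

cq11d = 4 ** 11                    # 11 items x 4 levels      -> 4,194,304 states
eq5d5l = 5 ** 5                    # 5 dimensions x 5 levels  -> 3,125 states
sf6d = prod([6, 4, 5, 6, 5, 5])    # assumed level split      -> 18,000 states
print(cq11d, eq5d5l, sf6d)         # 4194304 3125 18000
```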
In the descriptive analysis, frequency (proportion) was used to describe the categorical variables, and histograms were used to observe the distribution characteristics of the health utility values across the three scales. In the analysis of ceiling and floor effects, it is generally accepted that a dimension or total score has a ceiling or floor effect when more than 15 percent of respondents reach the highest or lowest score [17]. The intraclass correlation coefficient and the Bland-Altman method were used to examine correlation and consistency, with the health utility values of the three scales as the study index. Receiver operating characteristic (ROC) curves were used to assess the ability of the scale scores (health utilities) to distinguish the four subgroups defined by VAS score (0-65, 66-85, 86-95, and 96-100) [18]. The ROC curve provides information regarding the sensitivity and specificity of the scale score (health utility). The area under the curve (AUC) ranges between 0.5 (undifferentiated accuracy) and 1.0 (perfect accuracy); the greater the AUC, the higher the differentiation accuracy [19].

Statistical methods

P < 0.05 was considered statistically significant. SAS 9.2 was used for descriptive analysis, SPSS 26 was used to draw the ROC curves, and R language software was used to calculate the intraclass correlation coefficients and to produce the histograms and Bland-Altman charts.

Sample research quota situation and distribution of research cities

The survey was conducted across seven geographical regions of China, with 3-7 representative provinces, autonomous regions, and municipalities directly under the Central Government selected in each region. Based on existing research experience, quota sampling was carried out according to the distribution area, gender, and age of the general population in China [8,20]. The survey area covered all major cities in the seven sub-regions of China, spanning the seven geographical divisions of North China, Northeast China, East China, Central China, South China, Southwest China, and Northwest China, totaling 37 provinces, cities, autonomous regions, municipalities directly under the Central Government, and special administrative regions. The area where each investigator was located was selected for the interview survey. Investigators looked for interviewees in the public areas (streets, communities, schools, etc.) within the jurisdiction of the area where they were located. The general representative population in China was investigated in the form of one-to-one, face-to-face questionnaires, as shown in Table 1 below. The survey was carried out from February to November 2022 and comprised three different survey parts. 5000 questionnaires were allocated according to age and gender. During the research period, the team recruited a total of 196 investigators and interviewed a total of 5018 respondents, of which 5000 were effective interviews, as shown in Table 2 below.
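The three analytic building blocks named above (ceiling/floor check, Bland-Altman limits of agreement, and ROC AUC) are each a few lines in Python; the sketch below shows the conventional formulas on fabricated data, with function and variable names of our choosing.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ceiling_floor_share(u, best=1.0, worst=0.0):
    """Share of respondents at the best/worst score; above 0.15 flags a ceiling/floor effect."""
    return float(np.mean(u >= best)), float(np.mean(u <= worst))

def bland_altman_loa(a, b):
    """Bias and 95% limits of agreement (bias +/- 1.96 SD of paired differences)."""
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
a = np.clip(rng.normal(0.90, 0.10, 500), None, 1.0)        # toy utilities, scale A
b = np.clip(a + rng.normal(-0.03, 0.05, 500), None, 1.0)   # toy utilities, scale B
healthy = (rng.random(500) < 0.5).astype(int)              # toy binary health label

print(ceiling_floor_share(a))
print(bland_altman_loa(a, b))
print(roc_auc_score(healthy, a))  # AUC near 0.5 here, since the toy label is random
```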
Sociodemographic characteristics

This study examined 5000 members of the general population, including 2281 males and 2719 females ranging in age from 16 to 80. The specific distribution, together with the mean (median) utility values of the three scales, is shown in Table 3.

Distribution of measurements and ceiling/floor effects

Figure 1 demonstrates that none of the three sets of measurements conforms to a normal distribution and that the overall utility values are high. The measured utility values for CQ-11D ranged from -0.301 to 1. The histogram revealed that data continuity was satisfactory and that both medium and high utility values were well represented; only a small amount of discontinuous data appeared in the low-utility region. The distribution of the EQ-5D-5L utility values was relatively concentrated, ranging from -0.201 to 1, indicating a significant ceiling effect (49.04%). The distribution range of SF-6D was 0.036-1. The ceiling effect was observed in all three scales, with EQ-5D-5L exhibiting the highest level (19.42%), SF-6D the lowest (18.30%), and CQ-11D in between (18.64%). The floor effect was not observed in any of the three scales. The average health utility value of the healthy population was greater than that of the sick population, and the distributions of the health utility values of the other groups were comparable to the overall sample results, so they are not repeated here.

Consistency and correlation test of the measurement results

In this study, a Bland-Altman comparison was performed on the utility values of the three scales. Under the assumption of sampling error, the confidence intervals of the limits of agreement (LoA) between CQ-11D and SF-6D for the five groups are wider than those for the other two pairings. In the total population, more than 94.34% of samples from the CQ-11D vs. EQ-5D-5L and CQ-11D vs. SF-6D pairings fell within the LoA confidence interval, whereas only 91.46% of samples from the EQ-5D-5L vs. SF-6D pairing did, indicating that the EQ-5D-5L measurements were greater than those of SF-6D (427 samples); see Tables 4, 5 and 6.

Bartko first proposed the intraclass correlation coefficient (ICC) in 1966; it can be used to evaluate the consistency or reliability of different quantitative measurement methods [22]. In this study, the utility values of the three scales were compared in pairs to determine their ICCs. All P values were statistically significant (< 0.05), and all correlations were positive. Across all populations, the ICCs ranked as follows: CQ-11D vs. EQ-5D-5L > CQ-11D vs. SF-6D > EQ-5D-5L vs. SF-6D. The correlation between CQ-11D and EQ-5D-5L was high, whereas the correlation between EQ-5D-5L and SF-6D was low.
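The paper computed the ICCs in R; an equivalent sketch in Python is shown below using the pingouin package, assuming long-format data with one row per (respondent, scale) utility score. The toy values are invented for illustration.

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: each respondent scored by two "raters" (the two scales).
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "scale":   ["CQ-11D", "EQ-5D-5L"] * 4,
    "utility": [0.88, 0.92, 0.75, 0.81, 1.00, 1.00, 0.60, 0.71],
})
icc = pg.intraclass_corr(data=df, targets="subject", raters="scale", ratings="utility")
print(icc[["Type", "ICC", "CI95%"]])
```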
ROC analysis results

The ROC analysis results show that the AUCs of the health utility values of the three scales are higher than 0.5 in the general population, the healthy population, and the chronic patient group. Combined with the model quality evaluation results, the model quality of the three scales is considered high, and the results of the three scales are meaningful. In the overall population, the discrimination of the CQ-11D measurements (0.746) is better than that of the SF-6D (0.734) and the EQ-5D-5L (0.669); in the healthy population, the discrimination of the CQ-11D measurements (0.710) is better than that of the SF-6D (0.702) and the EQ-5D-5L (0.610); and in the chronic patient group, the discrimination of the CQ-11D measurements (0.755) is better than that of the SF-6D (0.735) and the EQ-5D-5L (0.704). In addition, the overall representativeness of the ROC results of the three scales is good, but the sample sizes of the healthy population and the chronic patient group are relatively small, which may impose certain limitations on the ROC analysis results (Figs. 2, 3, 4 and 5).

Discussion

Among the three scales, the CQ-11D has the most dimensions, covers the richest set of categories, and has the widest range of health utility measurement results. To a certain extent, the larger the number of measurement dimensions and the wider the measurement range, the more comprehensive the measurement results. The main findings are as follows: (1) After subdividing the health utility values for different populations, this study selected the areas where health utility values cluster to observe their distribution; the results show that the CQ-11D scale has good measurement performance in terms of the range and continuity of health utility values. (2) Compared with the EQ-5D scale, the CQ-11D did not show an obvious ceiling effect and showed no floor effect. (3) In previous studies, the CQ-11D scale has been shown to have good responsiveness [21,22], and its responsiveness was also well reflected in this study. Although the CQ-11D scale has more dimensions than the other two scales, its items cover basic concepts of traditional Chinese medicine and are close to everyday life, so it did not prove unsuitable in the survey. The CQ-11D scale therefore satisfies, to some extent, both richness of content and convenience of completion. This study selected representative samples of the Chinese population and used the CQ-11D, developed for the Chinese population, to measure HRQoL, then compared its measurement results with those of the two international universal scales, EQ-5D-5L and SF-6D, which can provide relevant evidence for researchers and policymakers and has theoretical and practical implications. The results indicated that the Chinese population's overall health utility was relatively high. The health utility values measured with the CQ-11D were greater than those measured with the SF-6D and less than those measured with the EQ-5D-5L, and there were differences in the measurement results across scales.
The health attributes of the population covered by the measurement results of CQ-11D are better

The measurement results show that the CQ-11D reflects better coverage and sensitivity in the measurement of HRQoL in the Chinese population; its measurement scope for quality of life is broader, and its measurement results have certain advantages in the Chinese population. The utility distribution of the CQ-11D is continuous and wide, and can cover most health states.

The measurement results revealed that the floor effect did not appear on any of the three scales, whereas the ceiling effect appeared on all of them. SF-6D had the lowest ceiling effect, at 18.30%, close to the critical value; EQ-5D-5L had the highest ceiling effect, at nearly 50%. The EQ-5D-5L expands the levels of the EQ-5D-3L, and empirical studies demonstrate that the ceiling effect of the EQ-5D-5L is reduced relative to the EQ-5D-3L [12]. However, the results continue to indicate a high ceiling effect: in healthy individuals, the ceiling effect of the EQ-5D-5L was greater than 50%, and even in patients with multiple chronic diseases and the worst theoretical health status, the ceiling effect of the EQ-5D-5L still exceeded the critical value of 15%.

Even though there are some differences in the distribution of measurement results across different populations, they all indicate that the EQ-5D-5L scale focuses primarily on areas with high utility values, whereas the SF-6D scale has the narrowest distribution range. In clinical research, a value between 0 and 1 is typically employed to represent quantitative health status results, with 0 representing death and 1 representing perfect health. People who are unconscious or bedridden for an extended time, accompanied by severe pain, or afflicted with a severe tumor disease may experience a health state worse than death; likewise, a patient with multiple chronic diseases, a lengthy course of medication, and combined medication, possibly with adverse reactions, may have a very poor health status. In this study, some of the measurement results of the CQ-11D and EQ-5D-5L were negative. However, this does not imply that the measurement performance of these two scales is necessarily better than that of the SF-6D; it is primarily related to the construction method of the utility scoring system attached to each scale and the construction result of the final utility scoring system.

The measurement results of the three scales have high consistency, but there are significant differences in the correlation results

The consistency of the CQ-11D and EQ-5D-5L is higher in samples from the total population, whereas the consistency of the SF-6D with the other two scales is lower. This may be because the gap between the SF-6D and the other scales is too wide, considering the possible differences in connotation and in how health is evaluated and measured. The EQ-5D-5L contains the primary factors influencing quality of life, with concise and well-defined dimensions. The dimensions and levels of the SF-6D are more detailed than those of the EQ-5D-5L, which to some extent makes it easier for respondents to locate their own situations in its descriptions. In addition, differences in the construction of the scoring systems and in the measurement process between the three scales may also contribute to the low consistency of the three measurement results.
As the health of a population improves or deteriorates, the three scales' measurements change differently, and the results of the three scales vary with the state of health. Results outside the agreement interval suggest that the measured value of the EQ-5D-5L was excessively high or that of the SF-6D inadequately low. This could be caused by the following: (1) the quantity and connotation of the scales' dimensions and levels (items) are vastly different; (2) there are significant differences in expression. The EQ-5D-5L, for instance, asks about the respondents' situation "on that day", whereas the SF-6D asks about the respondents' situation "during the past four weeks". In the remaining groups, the results of the three scales were consistent [7,13,23].

Like the consistency results, the ICC demonstrated a high correlation between CQ-11D and EQ-5D-5L across the entire sample. However, the correlation between the measurements of the EQ-5D-5L and the SF-6D was less than 0.5, even though the consistency analysis showed good agreement between the two. This may be because healthy individuals are in a relatively stable state when measured, making the intraclass correlation coefficient insensitive to their results. Across the different populations, the ICCs of the patients were higher than those of the healthy people in the same group.

There are differences in the performance of the three scales in the measurement of different groups of people and types of diseases

The ROC curve was drawn with the AUC as the judgment index. The results indicate that in the total population, the healthy population, and the sick population (including single chronic disease and multiple chronic diseases), the ability of the CQ-11D to differentiate between groups with different health states is superior. This implies that the CQ-11D is superior to the EQ-5D-5L and SF-6D in measurement sensitivity (differentiation) in the general population in China. This may be due to the number of dimensions and items in the CQ-11D; in particular, its dimensions are influenced by the holistic view of traditional Chinese medicine and focus on the overall status and feelings of the participants, making it somewhat more sensitive to changes in the health status of the Chinese population [7].

Depending on the disease being assessed, for example in patients with simple obesity, the CQ-11D demonstrates superior discriminative validity and greater sensitivity than the other two scales. Furthermore, for hypertension and chronic gastritis, the results demonstrated that the CQ-11D measurements had a larger area under the curve and a higher sensitivity than the other two scales.

When different scales are used for comparative research, the adaptation of the scales to specific situations should be discussed; no single gold standard exists. Previous research has demonstrated that the EQ-5D-5L is simple to comprehend and is less affected by the respondents' educational level and comprehension ability, whereas the SF-6D performs better in measuring slowly progressing diseases. Therefore, it is recommended that researchers choose the corresponding scales based on the measurement objectives of their research and the characteristics of the scales. Since the three scales differ significantly in dimensions and levels, two or more scales can be utilized in a study to reflect the health status of respondents accurately [12,24].
Limitations

This study has several limitations: (1) The sampling method used was quota sampling, which gives investigators more freedom of investigation within each category. Although the results of many quota surveys are close to those of stratified probability sampling, it cannot be determined whether the sample is sufficiently representative, and the results obtained cannot be well extrapolated to the general population of China. In future research, we will try to obtain survey data through probability sampling. (2) Considering the large sample size and for convenience, the order of the three scales was not randomized during the research process, which may have an impact on the survey data and lead to order bias. We will consider this issue in subsequent research and randomize the order of the three scales, and we note it here among the limitations. (3) Cross-sectional data cannot capture how the HRQoL of different populations and individuals in China changes over time. (4) In the sampling process, the sample size of some populations (such as the 16-25 age group) slightly exceeded the quota, which may have a certain impact on the study.

Fig. 1 Distribution of health utility values of the total samples of the three scales
Table 2 Sample survey quota (the research quota was based on the China Statistical Yearbook 2021)
Table 3 Sociodemographic characteristics of the research sample (N = 5000)
Table 4 Bland-Altman analysis results
Table 5 ICC analysis results
Smart Bone Graft Composite for Cancer Therapy Using Magnetic Hyperthermia

Magnetic hyperthermia (MHT) is a therapy that uses the heat generated by a magnetic material for cancer treatment. Magnetite nanoparticles are the most used materials in MHT. However, magnetite has a high Curie temperature (Tc ~580 °C), and its use may generate local superheating. To overcome this problem, strontium-doped lanthanum manganite could replace magnetite because it shows a Tc near the ideal range (42-45 °C). In this study, we developed a smart composite formed by an F18 bioactive glass matrix with different amounts of Lanthanum-Strontium Manganite (LSM) powder (5, 10, 20, and 30 wt.% LSM). The effect of LSM addition was analyzed in terms of sinterability, magnetic properties, heating ability under a magnetic field, and in vitro bioactivity. The saturation magnetization (Ms) and remanent magnetization (Mr) increased with the LSM content; the confinement of LSM particles within the bioactive glass matrix also caused an increase in Tc. Calorimetry evaluation revealed a temperature increase from 5 °C (composition LSM5) to 15 °C (LSM30). The specific absorption rates were also calculated. Bioactivity measurements demonstrated HCA formation on the surface of all the composites within 15 days. The best material reached 40 °C, demonstrating the proof of concept sought in this research. Therefore, these composites have great potential for bone cancer therapy and should be further explored.

Introduction

Cancer is the second leading cause of death worldwide, after heart disease, and is, thus, an important barrier to increasing life expectancy. This disease typically initiates due to mutations in genes that result in abnormal cell division and growth [1]. There are more than 100 different types, one of which is bone cancer [2]. According to the American Cancer Society, more than 3600 new cases of bone cancer were estimated for 2021 in the US [3]. Osteosarcoma is the most common type of primary bone cancer, followed by chondrosarcoma and Ewing sarcoma. The treatment is usually based on surgical removal of the tumor, followed by or combined with complementary treatments, such as chemotherapy, radiotherapy, hyperthermia and immunotherapy [4]. Most of these therapies provide solutions that are not selective enough because they destroy not only the malignant tumor, but also many healthy cells. Thus, a great challenge is to develop a therapy that cures this potentially fatal disease with minimal side effects [5,6].

Hyperthermia (HT) is a type of cancer therapy in which the cancer cells are heated by external agents to a temperature of ~43 °C. This induces cell death by denaturing proteins and inhibits the formation of new blood vessels, with little or no harm to normal tissue. This therapy varies according to the heat source and can be used as an adjunctive treatment associated with radiotherapy and chemotherapy. The approach adopted to increase the local temperature using an external alternating current magnetic field (AMF) is called magnetic hyperthermia (MHT). MHT uses the heat response generated by magnetic particles when subjected to a magnetic field, forming an internal heat source without any chemical substances or severe toxicity [7,8].
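For reference, the specific absorption rate mentioned in the abstract is conventionally extracted from the initial slope of a calorimetric heating curve; the standard textbook expression below is background, not a formula quoted from this paper. With C_p the specific heat of the sample medium, m_s the sample mass, m_mag the mass of magnetic material, and dT/dt the initial heating rate under the applied field:

\[ \mathrm{SAR} = C_p \, \frac{m_s}{m_{\mathrm{mag}}} \left. \frac{dT}{dt} \right|_{t \to 0} \]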
MHT science has been growing, although several challenges are still being discussed by the scientific community for clinical applications, such as: (a) thermal conversion efficiency: MHT devices should have the capacity of accurately delivering high thermal energy using a low mass of magnetic particles; (b) the field frequency and magnitude should be selected to minimize the production of eddy currents and dielectric heating, which can generate undesirable nervous and muscle responses; (c) the magnetic particles can be administered by different routes, such as intravenous, subcutaneous, intratumoral (surgically or not), or oral administration; for all these cases, the particles should be carried by a biologically inert or bioactive support (liquid or solid) to make them compatible within the body; (d) the magnetic material must be precisely controlled to prevent local overheating or a heterogeneous temperature distribution in the tumor mass and to ensure heat transfer to the local treatment [8-10].

Superparamagnetic and ferromagnetic iron oxide particles (Fe3O4) are usually considered as candidates for MHT due to their adequate magnetic properties, such as a high specific absorption rate (SAR), high saturation magnetization, and biocompatibility; however, one drawback of these materials is their far too high Curie temperature, which can reach 585 °C. Magnetic-field-induced heating occurs when the material presents magnetic order. Above Tc, the material changes to a non-ordered state and no longer responds (thermally) to the external field. High Curie temperatures give rise to uncontrolled and non-uniform tumor heating, which, in turn, may destroy the healthy adjacent tissues [11,12].

To achieve a 'self-controlled' temperature and avoid local overheating, materials that present a Tc within the MHT temperature range of interest have been studied. In this view, lanthanum-strontium manganites (LSM) are of particular interest. The LSM crystalline phases are manganese oxide-based compounds (manganites) with the formula R(1−x)AxMnO3, where the R sites are occupied by the rare-earth metal lanthanum and the A sites by strontium. In the case of LSM, the Tc can be tailored within the temperature range of interest by cationic (x) composition variations that influence distortions and the mixed valence of manganese. It is also possible to change the characteristic temperature through several parameters, such as phase composition, particle size and shape, particle arrangement, and ac-field frequency [13,14]. Recent review papers have summarized the physicochemical and magnetic properties of these materials, as influenced by the synthesis methods and reaction conditions, as well as by microstructural parameters such as particle size, surface coating (nature/amount), stoichiometry, concentration and/or the applied AMFs (including magnetic field (H) and frequency (f)) in MHT applications [15,16].

In the case of osteosarcoma, for example, it is common for the tumor to be surgically removed, leaving a significant bone defect behind, which must be filled with a graft. Besides the massive bone lesion, osteosarcoma has a high recurrence rate, requiring additional treatments. Thus, a bone graft having a balance between bioactivity and magnetic properties would be highly desirable.
In this work, we studied the effect of La0.8Sr0.2MnO3 (LSM) additions in powder form on the sinterability and in vitro bioactivity of F18 bioactive glass, as well as the magnetic properties of the resulting composites. Our purpose was to develop a smart composite material having a double function: (1) to regenerate the bone tissue after tumor removal and (2) to kill remaining or recurrent cancer cells if needed, with the advantage of not overheating the neighboring healthy cells.

Synthesis of the LSM Magnetic Phase

Following previous works [17-20], samples were prepared using mixtures of La2O3, Mn2O3, and SrCO3 in the cation molar ratio of 0.8La:0.2Sr:1Mn. They were homogenized in a planetary ball mill (Pulverisette 6-FRITSCH) at 350 rpm for 60 min, with anhydrous isopropyl alcohol. The slurry was dried at 100 °C/24 h and the resulting powder was uniaxially pressed into ø = 10 mm discs, using a pressure of ~200 MPa. These discs were pre-sintered in an electric muffle furnace at 1300 °C for 3 h, with a heating rate of 10 °C/min. After cooling, the discs were fragmented in an agate mortar until the particles passed through a 1 mm sieve, and then milled with isopropyl alcohol in a planetary mill for 30 min at a rotating speed of 350 rpm, reaching an average particle size of 1.7 µm.

Preparation of Magnetic Biocomposites

The F18 glass [21] was kindly provided by the VETRA-HighTech Bioceramics company, Ribeirão Preto-SP, Brazil. The glass frit was ground into a powder (average particle size ~5 µm) in the planetary ball mill (550 rpm/60 min). Then, the bioactive glass powder was mixed with the magnetic phase strontium-doped lanthanum manganite La0.8Sr0.2MnO3 at 5, 10, 20, and 30 wt% (denominated LSM5, LSM10, LSM20 and LSM30, respectively). These compositions were homogenized in the planetary ball mill with isopropyl alcohol at 150 rpm for 30 min. The powder mixture was uniaxially pressed into discs of ø 10 mm (~200 MPa), heated to 650 °C (heating rate of 10 °C/min) and cooled within the furnace.

Influence of LSM on the Sinterability of Bioactive Glass F18

The influence of LSM powder on the sinterability of F18 bioactive glass was studied by a heating stage microscope MISURA HSM ODHT 1400 (Expert System Solutions). To this end, cylindrical pellets (ø3 mm × 3 mm) were uniaxially pressed at 200 MPa. The green density of the pellets was approximately 55%. The sintering measurements were performed up to a maximum temperature of 900 °C with a heating rate of 10 °C/min. The cross-section area image projected during sintering was recorded every 1 °C and was used to calculate the shrinkage rate of the pellets. The density during the sintering process was calculated using Equation (1):

ρ = ρ0 / (h_r · A_r)    (1)

where ρ0 is the relative density of the green body, h_r is the relative height of the sample calculated by the ratio with the initial height (h/h0), and A_r is the relative area of the sample calculated by the ratio with the initial area (A/A0) [22]. The theoretical density of the composites was calculated by the rule of mixtures, Equation (2), which considers the weight percentage and the density of the phases that form the composites:

1/ρth = Σi (wi/ρi)    (2)

The weight percentage (wi) varies according to the composite composition, and the densities (ρi) of the F18 bioactive glass and the LSM phases are 2.6 g/cm3 and 6.5 g/cm3, respectively.

Microstructure

The pure LSM magnetic powder and the composite samples were 200-mesh-sieved to be used for the various tests.
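As a numerical companion to the sinterability analysis, the sketch below evaluates the relative density curve from heating-microscope shrinkage data and the theoretical composite density from the phase fractions. It is a minimal illustration, assuming the forms of Equations (1) and (2) as written above; the shrinkage values are made up for demonstration, not measured data.

```python
# Minimal sketch of Equations (1) and (2), assuming the forms
# rho = rho0 / (h_r * A_r) and 1/rho_th = sum(w_i / rho_i).
# The shrinkage values below are illustrative, not measured data.
import numpy as np

def relative_density(rho0, h_r, A_r):
    """Equation (1): relative density from relative height and relative area."""
    return rho0 / (h_r * A_r)

def theoretical_density(weight_fractions, densities):
    """Equation (2): inverse rule of mixtures for density by weight fraction."""
    w = np.asarray(weight_fractions, dtype=float)
    rho = np.asarray(densities, dtype=float)
    return 1.0 / np.sum(w / rho)

# Illustrative shrinkage trace for one pellet (green relative density 0.55)
h_r = np.array([1.00, 0.97, 0.92, 0.88])   # h/h0 during heating
A_r = np.array([1.00, 0.95, 0.88, 0.82])   # A/A0 during heating
print(relative_density(0.55, h_r, A_r))    # densification toward ~0.76

# Theoretical density of LSM20: 80 wt% F18 glass (2.6 g/cm3), 20 wt% LSM (6.5 g/cm3)
print(theoretical_density([0.80, 0.20], [2.6, 6.5]))   # ~2.96 g/cm3
```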
The phase composition was analyzed by X-ray powder diffraction (XRD: Rigaku Ultima IV) with Cu K-alpha radiation (λ = 0.15 nm, 40 kV, 20 mA). The HighScore Plus program was used for phase identification, and the quantification was carried out by the Rietveld refinement method. A scanning electron microscope (SEM: Philips XL30 FEG, F.E.I. Company, Hillsboro, OR, USA) was used to observe the particle morphology and size distribution.

Magnetization measurements as a function of the applied magnetic field (M×H) were performed using a vibrating sample magnetometer (VSM), a Quantum Design MPMS3 SQUID VSM, at three different temperatures (250, 300 and 350 K). The DC magnetic susceptibility measurements, χDC(T) = M/H as a function of temperature, were performed using zero-field-cooling/field-cooling (ZFC/FC) for the pure LSM phase and the LSM20 sample. In the ZFC/FC measurements, the sample was cooled from room temperature to 200 K without a magnetic field (zero-field cooling), and then a magnetic field of H = 100 Oe, 1 kOe, or 5 kOe was applied. χDC(T) was measured while the sample was heated at a rate of 2 K/min. Afterwards, the process was repeated, but a magnetic field was applied during the cooling (field cooling) and χDC(T) was measured. This analysis allows one to accurately determine the Curie temperature (Tc) and study the influence of temperature on the magnetic properties.

Calorimetry measurements were used to evaluate the composite heating ability when exposed to an alternating magnetic field (135 kHz, 100 Oe). Biocomposite bulk samples with dimensions of ø5 × 1 mm were inserted into plastic tubes (Eppendorf TM) with 200 mg of water and isolated with polystyrene foam. These samples were placed inside a solenoid coil which was 14 mm in diameter and 87 mm in length, and had 15 turns and 1.1 µH of inductance (see Figure 1). Then, the initial temperature was stabilized at 24 °C and the temperature rise was recorded using a fiber-optic thermometer (Qualitrol NOMAD-Touch Portable Fiber Optic Monitor, ±0.5 °C). All measurements were carried out within a time period of 500 s. For more details about the hyperthermia system, readers are referred to reference [23].

After a long period of time, the temperature tends to a constant maximum value called Tmax. This parameter can be obtained using Equation (3), which describes the variation of temperature as a function of time, from the adjustment of the experimental data obtained [24]:

T(t) = T0 + ΔTmax [1 − exp(−t/τ)]    (3)

where T0 is the initial temperature (24 °C), ΔTmax is the temperature change from the initial stage to a constant maximum value (steady state), and τ is the characteristic time constant of the heating curve. The specific absorption rate (SAR) is the amount of electromagnetic power absorbed per unit of mass, and it describes the efficiency of the magnetic particles in producing heat in response to an external alternating magnetic field. The SAR value is affected by different factors, such as particle size, shape and magnetic properties, and magnetic field parameters, i.e., frequency and magnitude [15,25]. The SAR was calculated according to Equation (4), which was adapted from Iglesias [26], considering a non-adiabatic system and employing parameters associated with the energy flow between neighboring systems:

T(t) = T0 + (SAR · mp/α) [1 − exp(−α t/Csusp)]    (4)
where Csusp is the heat capacity of the aqueous dispersion (J/K), mp is the mass of magnetic particles (g), T0 is the initial temperature (°C), t is the exposure time, and α is the conductance, a parameter that measures the intensity of the interaction between neighboring systems. Comparing Equations (3) and (4) gives ΔTmax = SAR·mp/α and τ = Csusp/α, so that SAR follows directly from the fitted heating curve. The heat capacity of water was taken as 4.2 J/g·°C. To calculate the heat capacity of the biocomposite samples, we used the rule of mixtures, considering 0.66 J/g·°C and 0.85 J/g·°C for LSM and the multicomponent F18 silicate glass, respectively.

In Vitro Bioactivity

The in vitro bioactivity test was performed using a simulated body fluid (SBF) solution, according to the method proposed by Kokubo et al. [27]. Biocomposite bulk samples (~ø10 mm × 3 mm) were soaked in SBF at 36.7 °C under continuous shaking for various times. These tests aimed to verify the rate of HCA layer formation on the sample surfaces. The samples were taken out of the SBF after soaking times from 24 h to 15 days, then carefully rinsed with acetone and dried at room temperature. After that, the samples were analyzed using a scanning electron microscope (SEM, Philips XL30 FEG) and by FTIR (SPECTRUM GX-DE, Perkin-Elmer Co, Waltham, MA, USA), collected in the 4000-400 cm−1 range. Both techniques were used to identify the different morphologies of the materials before and after soaking in SBF.

Results and Discussion

Figure 2a shows the sintering curves of the composites and describes the main characteristic temperatures, such as sintering, softening, expansion, and melting. These temperatures are summarized in Table 1. The sintering temperature (660 °C) of the composites was determined from the sintering curve, corresponding to the maximum linear shrinkage for all samples. The F18 bioactive material is a crystallization-resistant glass; therefore, sintering by viscous flow takes place to completion before surface crystallization. This fact can be confirmed in Figure 3, where the relative density at saturation reached 1.0 for pure F18 bioactive glass.

Table 1. Sintering temperature (Ts), softening temperature (Ta), and maximum densification temperature (Tmax) at sintering saturation. The typical error in these temperatures is ±5 °C.
As the content of LSM is increased (Figure 2), the onset of sintering is slightly shifted to higher temperatures. The shrinkage rate also decreased, which was expected, as the LSM particles act as barriers to viscous flow. However, except for sample LSM30 (A/A0 = 18%), all the compositions reached a final shrinkage of ~20% at saturation. To ensure that maximum densification of the composites occurs during sintering, the derivative of the sintering curve (Figure 2b) was evaluated. The minimum point indicates the temperature at which the densification mechanism takes place at its maximum rate. The maximum densification temperature lies in the range of 575-625 °C. Thus, at the sintering temperature of 660 °C, all compositions pass through the maximum densification range. For temperatures above 700 °C, a significant expansion was observed for pure F18 glass and all other samples. This could be caused by the expansion of air entrapped inside the sintered structure or perhaps by degassing. Both of these mechanisms, isolated or combined, lead to the formation of bubbles and sample swelling.

The calculated relative density curves are shown in Figure 3. All samples showed high density at saturation, with the presence of a single shrinkage stage. This behavior resulted from viscous flow sintering and is characterized by a sharp shrinkage in a short period. It can be observed that the compositions LSM5 and LSM10 showed maximum densification. The addition of LSM did not decrease the overall densification of the composite.

Figure 4 shows the XRD pattern of the LSM particles. The magnetic phase was well crystallized, considering the position of the diffraction peaks corresponding to the LSM standard diffractogram [ICSD 51655]; no signs of a secondary crystalline phase were detected. The XRD patterns were analyzed by Rietveld refinement to obtain the structural parameters. The calculated lattice parameters (a and c), unit cell volume (V), index Rwp, and goodness of fit (S) are given in Table 2. Rwp and S are Rietveld refinement parameters that quantify the agreement between calculated and experimental XRD intensities; a refinement is considered satisfactory when Rwp lies between 10% and 20% and S is close to 1. The LSM particles present a single phase with no detectable impurities, and they have a rhombohedral structure with the R-3c (167) space group due to the replacement of La by Sr ions [28].

The particle morphology and size distribution of the LSM and F18 bioactive glass are shown in Figure 5a,b, respectively.
The particles have irregular shapes and form large agglomerates resulting from the grinding process (milling balls). The LSM particles have a monomodal particle size distribution, with an average particle size of 1.7 µm. On the other hand, the F18 bioactive glass particles present an average particle size of 4.5 µm, with a bimodal particle size distribution. This difference is important for the encapsulation of the LSM particles by the glass matrix during sintering by viscous flow.

Figure 6 shows the XRD patterns of all composites, recorded to verify whether crystallization of the vitreous matrix occurred during the sintering process. The XRD patterns were similar, and only the magnetic crystalline phase LSM was identified in all samples. There were no signs of F18 bioactive glass crystallization. The bioactive glass F18 has greater stability and less tendency to crystallize compared to other bioglasses, allowing the sintering process to occur without forming surface crystals that would hinder the viscous flow process [30].

Magnetic Properties and Heating Efficiency under Alternating Magnetic Field

The magnetization measurements as a function of applied magnetic field (M×H) of the LSM sample at 250 K, 300 K, and 350 K are shown in Figure 8. The saturation magnetization (Ms), coercive field (Hc), and remanent magnetization (Mr) are summarized in Table 3. To compare the present results with other functional materials aimed at the same type of treatment, we have included the magnetic parameters from a previous paper [31]. The LSM exhibits a narrow magnetic hysteresis loop, which is representative of soft ferromagnetic materials with low coercivity and remanence at room temperature. Magnetic particles for biological applications are required to be soft magnets, which can be demagnetized with low coercive energy and retain some magnetization after removing the magnetic field. These properties make our material a suitable candidate for cancer treatment by magnetic hyperthermia, as this application requires a continuous magnetization process to generate heating.
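The loop parameters reported in Table 3 can be read directly off a measured M(H) branch. The sketch below extracts the remanence (M at H = 0) and coercivity (field at M = 0) by linear interpolation and takes the high-field plateau as the saturation value; the (H, M) points here are synthetic placeholders, not VSM data from this study.

```python
# Illustrative extraction of Ms, Mr and Hc from one branch of a hysteresis
# loop; the (H, M) points are synthetic, not measured data.
import numpy as np

def loop_parameters(H, M):
    """Return (Ms, Mr, Hc) from a monotonic loop branch sampled +Hmax -> -Hmax."""
    Ms = np.max(np.abs(M))                                # high-field plateau
    order_H = np.argsort(H)                               # np.interp needs ascending x
    Mr = np.interp(0.0, H[order_H], M[order_H])           # remanence at H = 0
    order_M = np.argsort(M)
    Hc = abs(np.interp(0.0, M[order_M], H[order_M]))      # coercive field at M = 0
    return Ms, Mr, Hc

# Synthetic soft-ferromagnet branch (emu/g vs Oe), descending field sweep
H = np.linspace(5000, -5000, 201)
M = 40 * np.tanh((H + 30) / 400)   # small offset -> nonzero Mr and Hc
Ms, Mr, Hc = loop_parameters(H, M)
print(f"Ms = {Ms:.1f} emu/g, Mr = {Mr:.1f} emu/g, Hc = {Hc:.0f} Oe")
```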
However, it is important to highlight that the magnetic properties of any ferromagnet depend on many factors, such as particle size, shape, crystalline defects, and surface effects. At 350 K, the LSM exhibits paramagnetic behavior without magnetic hysteresis, i.e., this corresponds to a temperature above the Tc of this material [32].

Figure 8. Magnetization as a function of the applied magnetic field of LSM particles at 250 K, 300 K (green), and 350 K (red); accuracy 10−8 emu. The inset exhibits details of the low-field region.

Figure 9 presents the magnetic susceptibility as a function of temperature (χDC(T)) using the ZFC/FC protocol for LSM particles, for H = 100 Oe, 1 kOe, and 5 kOe (Figure 9a-c, respectively). The curves show thermomagnetic irreversibility below Tc between ZFC and FC, associated with the competition between the magnetocrystalline anisotropy and the magnetostatic energy, which leads to a separation of the ZFC/FC curves. This behavior is expected due to the presence of multi-domain magnetic particles. At this temperature, the difference between the magnetocrystalline anisotropy and the magnetostatic energy is no longer null, so that, to reduce the total energy, the formation of domain walls is favorable, forming multidomain structures [23,33].
Figure 9. Temperature dependence of the magnetization (ZFC-FC) of the LSM sample for different magnetic fields; accuracy 10−8 emu, temperature uncertainty ±0.5 K. (a) 100 Oe; (b) 1 kOe; (c) 5 kOe.

The relationship between particle size and the magnetic properties of ferromagnetic materials has been widely reported. The critical particle size of typical ferromagnets is <30 nm; this size marks the transition from a single- to a multidomain structure, which totally changes their magnetic behavior. The LSM average particle size is larger than 1 µm; considering a multidomain structure, its magnetization process results mainly from the movement of the domain walls [32,34,35].

The Curie temperature (Tc) was taken as the inflection point on the χ(T) curves, as shown in the inset of Figure 9. The LSM sample exhibits a ferromagnetic-paramagnetic transition (FM-PM) at 305 K (32 °C), highlighted in the curves by a dashed line. This value is in agreement with the values reported in the literature [36]. However, it is worth mentioning that the variation in this value, depending on the magnitude of the applied field, results from the lack of structural homogeneity of the magnetic phases, which, in turn, affects the orientation of the moments with the application of the magnetic field [36,37].

There is a main peak and a "shoulder" in the χ(T) curves (see the inset in Figure 9a) that can be associated with the presence of more than one magnetic phase, which was not identified by XRD. The secondary phases may result from the formation of a magnetic phase with a stoichiometry distinct from that of the La0.8Sr0.2MnO3 phase. Similar results were found in reference [38]. These measurements can be correlated with the particle size, size distribution, anisotropy energy, magnetic ordering, or phase segregation. However, this feature is not clearly observed in Figure 9b,c because, at high fields, the magnetization of the other phase was probably saturated. These figures also show small differences in Tc, calculated from the peaks of the χDC(T) slopes. These differences are insignificant and associated with small variations in magnetization around Tc.

Above the Curie temperature, there is no magnetic hysteresis, but the paramagnetic phase exhibits a typical Curie-Weiss behavior (χDC(T) ∝ 1/T) due to the presence of non-ordered magnetic moments in the paramagnetic phase. Therefore, this movement of magnetic moments in the paramagnetic state must be considered because it can also generate magnetic losses in the form of heat above Tc [39].

To understand the effect of the bioactive glass matrix on the magnetic properties of the LSM particles and to evaluate the magnetic behavior of the composites, all M×H curves of the composites at 300 K were normalized by the percentage of the LSM phase and are shown in Figure 10. As expected, the saturation magnetization (Ms) and remanent magnetization (Mr) do not change and present similar values (Figure 8), showing that the matrix does not contribute to the magnetic properties of the composite, as expected [40].
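Determining Tc as the inflection point of χ(T), as done above, reduces to locating the extremum of the numerical derivative dχ/dT. A minimal sketch on a synthetic susceptibility trace (the function and its parameters are placeholders, not the measured ZFC/FC data):

```python
# Locate Tc as the inflection point of chi(T), i.e., the peak of |dchi/dT|.
# The chi(T) trace below is a synthetic ferromagnetic-paramagnetic transition.
import numpy as np

def curie_temperature(T, chi):
    """Return T at the steepest change of chi(T) (inflection-point estimate)."""
    dchi_dT = np.gradient(chi, T)            # numerical derivative on the T grid
    return T[np.argmax(np.abs(dchi_dT))]

T = np.linspace(250, 350, 501)                    # K
chi = 0.5 * (1 - np.tanh((T - 305.0) / 4.0))      # synthetic FM-PM transition at 305 K
print(f"Tc ~ {curie_temperature(T, chi):.1f} K")  # ~305 K, as for the pure LSM phase
```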
Likewise, the coercivity of the samples does not change with increasing LSM content in the composites. These composite magnetic parameters are shown in Figure 11a.

The field-cooled (FC) and zero-field-cooled (ZFC) χ(T) curves of the LSM20 composite are shown in Figure 12, for magnetic fields of 100 Oe, 1 kOe, and 5 kOe. The Tc is highlighted on the ZFC-FC curves by a dashed line (Figure 12). This sample exhibits a Curie temperature of 311 K (38 °C), a value above that found for the pure LSM phase. Earlier reports suggest that this variation in Tc results from residual stresses in the magnetic particles generated during sintering, due to the difference in the thermal expansion coefficient (TEC) of the main crystalline phase and the matrix. The TECs of the F18 matrix and the magnetic LSM phase are, respectively, 15.5 × 10−6 °C−1 and 11.4 × 10−6 °C−1 [41-43]. Hence, the LSM particles may be under residual stresses.

The magnetic behavior of the LSM phase is maintained in the LSM20 composite, even in the presence of the bioactive glass matrix. For low-magnitude fields (<100 Oe), the ZFC/FC curves separate at a critical temperature, called the thermomagnetic irreversibility temperature (Figure 12a). As for the pure LSM phase, above Tc there is a small positive susceptibility. However, the susceptibility magnitude is lower than that of the pure LSM phase due to the presence of the non-magnetic bioactive glass matrix.
Calorimetry measurements were carried out to evaluate the thermal response of the composites under the influence of an alternating magnetic field, with a field strength and frequency of 100 Oe and 135 kHz, respectively (Figure 13). An aqueous suspension of composite particles was used as the sample. We observed that, after activation of the external magnetic field, the composites show an abrupt temperature increase, which saturates after a certain time, depending on the LSM particle concentration in the composite. The saturation temperature is called Tmax and it is close to, but below, Tc.

Hysteresis loss is the main loss process in magnetic materials with a multidomain structure, attributed to ferromagnetic particles above the critical diameter. The heating is related to the area of the hysteresis loop over a complete magnetization cycle, which depends on various factors, such as defects in the crystal structure, movements of domain walls, anisotropy, and the frequency of the alternating magnetic field. In general, these mechanisms transform magnetic energy into thermal energy under the influence of an alternating magnetic field, and their efficiency is measured in terms of the specific absorption rate (SAR) [24].

The SAR value was estimated from the heating data using Equation (4) (Figure 14a). Normalized by the mass of LSM, the SAR has an estimated value of 3.5 W/g LSM. The fluctuations among the values obtained for the different compositions could be attributed to inaccuracies in the mass concentrations of LSM during the manufacture of the composites. Considering the total mass of the composite, i.e., including the mass of F18 glass, the SAR value increased with the content of LSM (see Figure 14a). The temperature rise, from the initial temperature up to Tmax, increases with the LSM phase concentration. Even with similar SAR values, the Tmax achieved during the calorimetry test differs among the composites because this behavior is related to the area of interaction between the LSM and matrix constituent systems. For a higher LSM content, two effects are expected: (1) the particles interact with each other under the applied magnetic field and (2) the heat transfer becomes more effective due to the larger contact area with the matrix.

Figure 14b compares the Tmax values obtained experimentally and calculated by Equation (3). The experimental Tmax is close to Tc and lies below the calculated Tmax. This happens because Tmax depends both on the features of heat exchange with the environment and on the magnetic parameters of the particles; in particular, on the dispersion in the values of magnetization and Curie temperature [11].
Both the calculated and experimental maximum temperature increases (∆T) for each composition are shown in Table 4. As can be seen, ∆T is approximately 5 °C for the composite containing 5 wt% of LSM and 15 °C for the composite containing 30 wt% of LSM. The maximum temperature achieved was 39 °C, for the LSM30; however, to be effective in cancer hyperthermia, the active material must provide a local temperature rise up to 42-43 °C. In the case of the LSM-F18 glass composites, there are two plausible alternatives: to increase the LSM content, or to alter the LSM stoichiometry to obtain a magnetic phase with a slightly higher Tc, since the magnetization of the particles decreases as the temperature approaches Tc, which in turn decreases the heat generation.
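The calorimetry analysis above reduces to fitting Equation (3) to the measured T(t) trace and converting the fit parameters to SAR through Equation (4). The sketch below does this with scipy, assuming the exponential form of Equation (3) and the non-adiabatic relation SAR = α·ΔTmax/mp (with α = Csusp/τ) implied by Equation (4); the heating trace, masses, and heat capacity are placeholder values, not the measured data.

```python
# Fit Equation (3), T(t) = T0 + dTmax*(1 - exp(-t/tau)), to a heating trace and
# estimate SAR = alpha*dTmax/m_p with alpha = C_susp/tau (assumed relations).
# All numbers below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

T0 = 24.0          # initial temperature, deg C
C_susp = 0.84      # heat capacity of the suspension, J/K (assumed)
m_p = 0.010        # mass of magnetic particles, g (assumed)

def box_lucas(t, dTmax, tau):
    return T0 + dTmax * (1.0 - np.exp(-t / tau))

# Synthetic noisy heating trace over the 500 s measurement window
rng = np.random.default_rng(1)
t = np.linspace(0, 500, 251)
T_meas = box_lucas(t, 15.0, 90.0) + rng.normal(0, 0.1, t.size)

(dTmax, tau), _ = curve_fit(box_lucas, t, T_meas, p0=(10.0, 60.0))
alpha = C_susp / tau               # conductance to the surroundings, W/K
sar = alpha * dTmax / m_p          # W per gram of magnetic phase
print(f"dTmax = {dTmax:.1f} C, tau = {tau:.0f} s, SAR ~ {sar:.1f} W/g")
```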
When developing a new material intended for cancer hyperthermia, not only the Tc of the magnetic phase, but also the following aspects must be considered: (1) the amount of the magnetic phase present within the composite; (2) the average particle size of the magnetic phase; (3) the thermal conductivity of the matrix; (4) the wettability of the magnetic phase by the glass during sintering; (5) the difference in thermal expansion coefficients between the magnetic phase and the matrix, whether or not it results in residual stresses. All the aforementioned aspects may affect the overall Tc of the composite and, therefore, the SAR and Tmax. Another aspect to be considered is the magnetic field parameters that could be used in real therapy. Our findings are in agreement with the results recently published by Shlapa et al. [11]. Both SAR and Tmax are very sensitive to the magnetic field parameters used.

Figure 15 shows the relationship between the magnetic susceptibility and the magnetocaloric behavior, as a function of temperature, for the LSM20 composite. Magnetic susceptibility is a measure of how much a material is magnetized when submitted to an applied magnetic field, χDC(T) = M/H. When the material is heated, the magnetization becomes inversely proportional to the temperature (Curie's law) and then decreases drastically as the temperature rises owing to the heating capability of the particles.

Figure 15. Thermal behavior of the LSM20 composite around Tc with a magnetic field of 100 Oe.

For MHT applications, with the magnetic material inside the body at a temperature of ~36 °C, this is a very sensitive phenomenon, mainly due to the smaller magnetization of the particles close to the Curie temperature. Hence, it should be considered in the design of new materials for self-controlled magnetic hyperthermia.
In Vitro Bioactivity

An ideal material for treating bone cancer by hyperthermia would fulfill a double function: to kill the cancer cells through heating in the first stage of the treatment and to promote healthy bone cell growth in a second stage. Marin et al. [44] demonstrated the capacity of the F18 bioactive glass to stimulate osteogenesis; in other words, it has the ability to regenerate bone tissue, which is very important for the composite application.

The in vitro bioactivity, or apatite formation ability, of the composites was evaluated by an in vitro bioactivity test using Kokubo's SBF-K9 solution. As shown in Figure 16, infrared spectroscopy analysis detected hydroxycarbonate apatite (HCA) layer formation after 48 h for the LSM5 samples (Figure 16a) and after 15 days for the LSM30 (Figure 16d). The presence of an HCA layer was confirmed by three main peaks at approximately 1050 cm−1 (P-O stretch) and at 602 and 590 cm−1 (P-O bend) [21,45]. The three peaks are indicated by black arrows in Figure 16.

In general, by increasing the concentration of LSM, the onset of HCA crystallization shifts to longer times than for pure F18 (onset ~12 h [15]). This behavior was expected since, with the increase in the concentration of the magnetic phase, there is a reduction in the area of glass on the surface of the composite exposed to the SBF solution, reducing the leaching of the ions necessary for the formation of the HCA layer.

Figure 17 shows the SEM images of all sintered samples after soaking in SBF for 7 days. An HCA layer covers the surface of all samples after exposure to SBF. These globular formations, comprising intertwined HCA acicular crystals, are a crystalline habit commonly observed after the in vitro test. According to Souza et al. [21], with increasing exposure time of the bioactive glass to the SBF-K9 solution, the globular HCA structures on its surface grow in size. This can be noted for the LSM5 and LSM10 composites, where the globular structures were enlarged. The only exception was the LSM30 sample, whose surface was not completely covered by an HCA layer even after 15 days (Figure 17d). In terms of apatite formation ability, the performance of these composites is better than that of other bioactive glasses and glass-ceramics reported in the literature, except for the gold standard 45S5 bioglass and the biosilicate glass-ceramic [46].
Conclusions

In this study, bioactive magnetic composites were developed through the sintering of F18 bioactive glass containing gradual additions of a strontium-doped lanthanum manganite (La0.8Sr0.2MnO3-LSM). ZFC/FC curves showed that the pure LSM phase has a Tc of ~32 °C. However, when the LSM particles are constrained in the F18 bioglass matrix, the Tc increases to ~37 °C (composition LSM20). Both the saturation magnetization and the remanence increased with the LSM content, although not linearly. Calorimetry evaluation in an aqueous medium revealed that the composites exhibit a fast temperature increase with time, reaching saturation within 5-8 min, depending on the LSM content. The measured temperature increases under an external magnetic field (∆T) ranged from 5 °C (LSM5) to 15 °C (LSM30). The magnetic susceptibility decreased drastically with increasing temperature and saturated at 2-3 °C below the Tc. The calculated values of the specific absorption rate were smaller than that estimated for the pure LSM phase (3.5 W/g), lying between 1.4 W/g (LSM5) and 3.0 W/g (LSM30). As proven by in vitro tests, all the composites in this work showed significant apatite formation ability; however, the addition of LSM increased the onset time for HCA formation, from 2 days (composition LSM5) to 15 days (LSM30). The best composite reached 40 °C in 500 s, which is quite close to the desired tumor treatment temperature (42 °C). This result demonstrates the proof of concept sought in this research. Therefore, these materials show great potential to be used as smart bioactive bone grafts in patients affected by bone tumors and warrant further development.
Wireless battery free fully implantable multimodal recording and neuromodulation tools for songbirds

Wireless, battery free and fully implantable tools for the interrogation of the central and peripheral nervous system have quantitatively expanded the capabilities to study mechanistic and circuit level behavior in freely moving rodents. The light weight and small footprint of such devices enable full subdermal implantation, which results in the capability to perform studies with minimal impact on subject behavior and yields broad application in a range of experimental paradigms. While these advantages have been successfully proven in rodents that move predominantly in 2D, the full potential of a wireless and battery free device can be harnessed with flying species, where interrogation with tethered devices is very difficult or impossible. Here we report on a wireless, battery free and multimodal platform that enables optogenetic stimulation and physiological temperature recording in a highly miniaturized form factor for use in songbirds. The systems are enabled by behavior guided primary antenna design and advanced energy management to ensure stable optogenetic stimulation and thermography throughout 3D experimental arenas. Collectively, these design approaches quantitatively expand the use of wireless subdermally implantable neuromodulation and sensing tools to species previously excluded from in vivo real time experiments.

Recent advances in resonant magnetic power transfer 1, highly miniaturized device footprints 2, and digital control of wireless, battery free and fully implantable systems 3 for the modulation of the central 4 and peripheral nervous system 5 have resulted in highly capable platforms that enable stimulation and recording of cell type specific activity in the brain and the periphery. These systems quantitatively expand the experimental paradigms that can be realized with rodent subjects through the ability to stimulate and record from multiple animals simultaneously 6 and, at the same time, have minimal impact on subject behavior 2, enabling recordings in naturalistic, ethologically relevant environments. The use of such devices has been demonstrated almost exclusively in rodents due to the availability of the genetic toolset for optogenetics 3,7 and genetically targeted fluorescent indicators 6. The capabilities of these platforms, however, in principle enable the modulation of the central and peripheral nervous system beyond common animal models. Flying subjects have previously been only rarely used for optogenetic modulation 8, as they are commonly ruled out as species for behavioral experiments due to the lack of optogenetic tools that can support their free movement. Freely moving songbirds represent a particularly interesting species to target with optogenetic devices due to their extensive use as model organisms to study mechanisms of human vocal learning and production 9. The development of optogenetic tools in songbirds offers the ability to finely tune neural activity in specific pathways in real time and affect vocal performance 10,11. However, as shown in rodent species, the current methods of stimulation via optical fibers limit animal mobility 6, induce stress 12, cause mechanical damage 13, and require advanced cable management 14. These limitations essentially eliminate free flight and multi animal experiments in flying animals.
Wireless and battery free devices with the ability to optogenetically stimulate and record temperature have not been utilized in the context of flying species due to challenges in primary antenna design, device miniaturization, and digital data communication throughout the typically larger experimental arena volumes required for such subjects. Here we introduce two new concepts that enable the operation of wireless battery free devices in songbirds. We employ deep neural network analysis of behavior to identify the volume most occupied by the subject and design the primary antenna to elevate power delivery in these regions, and we introduce advanced power management that tailors digital communication to energy availability on the implant to enable reliable data uplink throughout the experimental arena.

Results

Wireless, battery free multimodal neuromodulation devices for songbirds. To achieve device properties that permit the use of wireless and battery free implantable devices in songbirds, device form factor, footprint, and size have to be tailored to the available subdermal space on the bird skull. In this work, we use zebra finches, which belong to the family of songbirds, have an average available head area of 0.825 cm2 (Supplementary Information Fig. S1), and typically weigh 14.4 g 15. This is a 10.5% reduction in available head space compared to mouse animal models 16. The resulting shape and form factor that allows for the highest energy harvesting capability and space to position optogenetic probes is an oblong outline that maximizes antenna size and conforms readily onto the skull of the subject (Supplementary Information Fig. S2). The layered makeup of the device comprises two copper layers insulated by a layer of polyimide carrier, populated with surface mounted components on the device top and bottom side, and encapsulated with parylene and elastomeric materials for modulus matching (Fig. 1a). The device platform is demonstrated with two device versions enabling stimulation and recording. Specifically, we created a device with capabilities in bilateral optogenetic stimulation (Fig. 1b) and a device architecture that enables both thermography and optogenetic stimulation (Fig. 1c). For either embodiment, the device relies on resonant magnetic coupling at 13.56 MHz 1, which can cast energy through a primary antenna, utilizing a commercial power amplifier readily available at this operational frequency, to the secondary antenna located on the implant (Fig. 1d). The captured magnetic field yields a current that is rectified and regulated by a low dropout regulator (LDO) to drive the digital and analog electronics that handle optogenetic stimulus, digitization, and wireless communication. The bilateral optogenetic stimulation device has minimal electronic hardware requirements, which results in a smaller footprint that enables facile placement of the stimulating probes, which rely on blue stimulation micro-inorganic light emitting diodes (μ-ILEDs) controlled by the microcontroller (μC) with current limiting resistors R1 and R2. The multimodal optogenetic stimulator and thermography device uses an operational amplifier to condition signals from an ultra-small outline negative temperature coefficient (NTC) sensor to yield mK sensor resolution, a set of capacitors to store energy temporarily, and an infrared data uplink to relay signals wirelessly 2. A photographic image of the backside of a device with thermography capabilities is shown in Fig. 1e, revealing the NTC sensor that can be placed in intimate contact with the skull to capture high fidelity thermal signatures of the subject.
The inset of Fig. 1e shows an example of the device's temperature recording modality over a 4 h period. A typical experimental arena, including exposure and recording chambers, is shown in a photographic image in Fig. 1f, where the antenna and tuning system are installed at the right-hand chamber. An image of the finch perching with the infrared (IR) LED activated during a data transmission event is shown in Fig. 1g, highlighting the miniaturized form factor of the device, which enables naturalistic motion and has no protruding features, allowing for cohabitation with other birds in the colony.

Behavior guided antenna engineering. In comparison to wireless subdermally implantable and battery free devices for use in rodents, birds often occupy larger volumes of space. By comparison, standard experimental cage volumes for mouse animal models are 45% smaller than the standard experimental arenas used for songbirds 1,7,17. It is therefore important to maximize transfer efficiency in the most occupied locations of the experimental arena to enable power hungry applications that feature multimodal stimulation and recording. Because the behavior of the animals can vary depending on experimental paradigm and species, we introduce a strategy that utilizes state of the art deep neural network analysis of subject behavior (DeepLabCut 18), originally developed for rodents. A simple webcam recording can be sufficient to track the animal's movement and subsequently build a spatial position analysis that can be used to adjust primary antenna parameters, as outlined in the workflow in Fig. 2a. Spatial position heat maps can be created for the top (Fig. 2b) and side (Fig. 2c) views used to record the animals. Details on the deep neural network analysis can be found in the Methods section and in Supplementary Information Figs. S3 and S4. The behavioral pattern indicates that most of the time is spent in the upper and lower halves of the arena, in close proximity to the arena walls. This behavioral pattern is maintained throughout the day as the bird frequently flies to the floor of the arena to get food and then returns to the perch located near the top of the arena. This spatial occupancy pattern, however, does not match the conventional antenna design introduced for wireless and battery free implants for rodents 7. This primary antenna layout for a 20 cm x 35 cm x 35 cm experimental arena with equidistant antenna loops is shown in Fig. 2d and produces a relatively uniform power distribution, with the highest power values reached in the corners of the arena (23.34 mW) and the average power at the center of the arena volume (14.61 mW) found at a height of 9 cm from the cage floor (detailed measurement procedure outlined in the Methods section). This distribution, however, is not ideal for the behavioral patterns of the birds. To adjust the power distribution to match subject behavior, the primary antenna design was chosen in a Helmholtz-like configuration for the same arena size (Fig. 2e). This antenna arrangement produces a stronger resonant magnetic field in the lower and upper halves of the arena, with a power of 18.90 mW measured at the center at a height of 5 cm from the cage floor and a power of 16.64 mW at the center at a height of 25 cm, substantially increasing the power transfer at the horizontal levels most occupied by the birds.
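The occupancy maps of Fig. 2b, c can be reproduced from any pose-tracking output as a simple 2D histogram of tracked positions. A minimal sketch, assuming DeepLabCut-style (x, y) coordinates in centimeters for a 35 cm x 35 cm side view; the trajectory below is random placeholder data, not a recorded session:

```python
# Build a spatial-occupancy heat map from tracked (x, y) positions, as used to
# guide the primary-antenna design. Coordinates are random placeholders
# standing in for pose-tracking output over a 35 cm x 35 cm arena side view.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 35, 50_000)                      # cm, horizontal position
y = np.concatenate([rng.normal(5, 2, 25_000),       # frames near the arena floor
                    rng.normal(28, 2, 25_000)])     # frames near the upper perch
y = np.clip(y, 0, 35)

# 1 cm x 1 cm bins; occupancy[i, j] = fraction of frames spent in that cell
occupancy, x_edges, y_edges = np.histogram2d(
    x, y, bins=(35, 35), range=[[0, 35], [0, 35]], density=False)
occupancy = occupancy / occupancy.sum()

# Report the horizontal levels where the subject spends the most time
per_height = occupancy.sum(axis=0)
top = np.argsort(per_height)[-3:][::-1]
print("most-occupied heights (cm):",
      [f"{y_edges[i]:.0f}-{y_edges[i + 1]:.0f}" for i in top])
```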
Maximum power, which is typically achieved in the corners of the cage because of inductive coupling dominated regimes closer to the antenna, increases by 80% for the Helmholtz-like coil configuration (42.03 mW) compared to the standard configuration (23.34 mW). This approach of modifying antenna geometry to tailor power delivery to the subject based on spatial position is broadly applicable, enabling new experimental paradigms. Strategies to preferentially deliver power are demonstrated in Supplementary Information Fig. S5. It is also possible to change arena dimensions significantly if the same volume is maintained. This is demonstrated in Supplementary Information Fig. S6a, where an arena length of 105 cm with cross-sectional dimensions of 15 cm × 15 cm can sustain device operation. Arenas with increased volumes are also possible by combining multiple RF power sources to achieve coverage for larger volumes (Supplementary Information Fig. S6b-d) with arbitrary shape, significantly expanding the utility of the devices to investigate behavior with simultaneous neuromodulation. Equally important for the optimization of power transfer is the design of the secondary antenna on the implant. The antenna geometry is designed to occupy the largest possible area of the device outline (Fig. 2f). Antenna performance is compared at 5.6 V, which is the maximum operation voltage for the LDO and capacitors used in this design. Typically, a higher number of turns results in a gain in harvesting ability because of the larger inductance of the antenna [19]. However, with higher inductance the maximum harvesting capability usually occurs at higher voltages, which are not practical for use in highly miniaturized systems because small outline components generally also feature lower operation voltages. Figure 2g shows that the ultimate peak harvesting capability, tested in a 20 cm × 35 cm × 35 cm experimental arena at a height of 25 cm with 10 W RF input, was achieved by a 10-turn dual layer antenna (5 turns on the top and bottom layers of the device) with a trace width of 100 μm and a trace gap of 50 μm, which harvests a peak power of 15.46 mW at a load of 5.6 kΩ and 9.23 V. The 8-turn antenna, which has 4 turns on the top and bottom layers of the device with identical trace spacing as the 10-turn device, in contrast has a lower peak harvesting capability of 14.14 mW, which occurs at 7.38 V. This antenna option significantly outperforms the 10-turn variant at 6.0 V and below, surpassing the 10-turn device by 1.35 mW at the typical operation voltage of the rectifier at high loads (3.5 V). This optimization enables higher harvested power and improves the usable footprint for electronics, sensors, and neuromodulation probes. The harvesting capability of the antennas scales linearly with RF field input power; a characterization is shown in Supplementary Information Fig. S7. Angular harvesting capability, an important consideration in freely behaving subjects due to behaviors such as feeding that can decrease harvested energy, exhibits a linear decrease of harvested power (Supplementary Information Fig. S8), which is consistent with behavior observed in previous work [1]. Birds are usually social animals that often cohabitate small spaces; because of this behavior, an investigation into detuning of the secondary antenna for two subjects in close proximity was conducted. Results shown in Fig. 2h reveal that power harvesting capability is only affected if the devices are overlapping, which suggests stable operation of many devices even in close proximity of subjects.
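The peak-power figures above follow from a load sweep: at each load resistance R_L the delivered power is P = V_L²/R_L, and the optimum is the maximum of the sweep. A minimal sketch of such an analysis is given below; the voltage readings are illustrative placeholders, not the measured data from the paper.

```python
# Minimal sketch of a rectenna load sweep: find the load that maximizes
# delivered power P = V^2 / R. Voltage values are hypothetical.
import numpy as np

loads = np.array([1e3, 2e3, 3.3e3, 5.6e3, 10e3, 22e3])   # ohms
v_load = np.array([2.8, 4.6, 6.2, 9.2, 10.8, 12.0])      # volts (hypothetical)

power_mw = (v_load ** 2) / loads * 1e3
best = int(np.argmax(power_mw))
print(f"peak harvest: {power_mw[best]:.2f} mW at "
      f"{loads[best]/1e3:.1f} kOhm and {v_load[best]:.2f} V")
```

Running the same sweep at a fixed voltage cap (here 5.6 V, the component limit) rather than at the unconstrained peak is what favors the 8-turn antenna in practice.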
Stimulation control and characterization. The small space available on the finch skull necessitates a significant reduction of electronics footprint, 37.5% compared to previous work in rodents [1]. At the same time, control capability has to increase due to the requirement to stimulate many subjects simultaneously. To address these needs, the communication strategy leverages a one-wire-like communication protocol that utilizes the EEPROM of the μC together with ON/OFF keying of the power supply to the implants for one-way data communication (Fig. 3a). Specifically, sequences of pulses are sent to the device to select a stimulation state saved in the nonvolatile memory on the device (Fig. 3b). Combinations of 90 and 130 ms pulses, which represent a logic 0 and 1, respectively, are used to select a device and its program, which includes the specific frequency, duty cycle, and probe selection of the optogenetic stimulus. Figure 3b outlines the basic wireless control principle in which the resonant magnetic field is modulated, indicated by the envelope of a pickup coil (black trace). The resulting supply voltage at the μC on the implant follows the power supply (orange trace), which is evaluated by internal timers; the results are stored in nonvolatile memory and, after completion of the byte code, evaluated to set the stimulation state. The resulting μ-ILED timing (blue traces) can be used for a variety of stimulation states. The protocol in the top half set of graphs of Fig. 3b sets the device to operate at a frequency of 30 Hz, a duty cycle of 5%, and with the left injectable probe on. The same device in the bottom half set of graphs is set to 10 Hz, a duty cycle of 15%, with the right probe on. All devices were set to operate at 10 mW/mm²; a graph of operating intensities and required electrical power [1,4] can be found in Supplementary Information Fig. S9. A detailed look at the protocol and subsequent state selection is shown in Fig. 3c. Here, 2 bits are allocated to device selection (four directly addressable devices), followed by 4 bits for frequency selection (16 states), 4 bits for duty cycle selection (16 states), and 2 bits for probe selection (left, right, and dual probe). The data were modulated with ON/OFF keying using a commercial RF power source (Neurolux Inc.) and custom timing hardware. Multiple devices in the same magnetic field may communicate with the system, as shown in Fig. 3d and Supplementary Movie SV1. The mechanical layout of the stimulators was engineered to allow for maximum flexibility during implantation, enabling the targeting of a large number of brain regions without implant redesign. This is accomplished by stretchable serpentines linking the injectable probe and the device body. Figure 3e demonstrates this in a photographic image where the injectable probe is stretched substantially. The design is validated with finite element simulation (details in the Methods section and Supplementary Information Figs. S10 and S11). The inset in Fig. 3e shows a displacement of 1.2 mm (24% strain), resulting in a maximum strain in the copper layer of 4%, which is well below the yield strain of copper, making the probe sufficiently and repeatably stretchable for implantation in a variety of positions. An example configuration of the dual probes is shown in Fig. 3f, where the probes are articulated to specific areas of the brain.
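The 12-bit state-selection word described above can be illustrated with a short encoder/decoder. This is a hedged reconstruction from the bit allocation in Fig. 3c (2 device bits, 4 frequency bits, 4 duty cycle bits, 2 probe bits); the exact bit ordering on the real hardware is an assumption.

```python
# Hedged sketch of the 12-bit command word from Fig. 3c.
# Bit order (device | frequency | duty | probe) is assumed, not confirmed.
PULSE_MS = {0: 90, 1: 130}  # logic 0 -> 90 ms pulse, logic 1 -> 130 ms pulse

def encode(device: int, freq_idx: int, duty_idx: int, probe: int) -> list[int]:
    assert 0 <= device < 4 and 0 <= freq_idx < 16
    assert 0 <= duty_idx < 16 and 0 <= probe < 3  # left, right, dual
    word = (device << 10) | (freq_idx << 6) | (duty_idx << 2) | probe
    bits = [(word >> i) & 1 for i in reversed(range(12))]
    return [PULSE_MS[b] for b in bits]            # pulse durations to transmit

def decode(pulses_ms: list[int]) -> dict:
    bits = [1 if p > 110 else 0 for p in pulses_ms]  # threshold between 90/130
    word = int("".join(map(str, bits)), 2)
    return {"device": word >> 10, "freq_idx": (word >> 6) & 0xF,
            "duty_idx": (word >> 2) & 0xF, "probe": word & 0x3}

pulses = encode(device=1, freq_idx=3, duty_idx=1, probe=0)
print(pulses, decode(pulses))
```

On the implant, the same decoding is done by timing the supply-voltage envelope with the μC's internal timers rather than receiving explicit pulse lists.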
Advanced power management and data communication strategies. Sensing and stimulation capabilities can expand the utility of implants dramatically; however, wireless communication can be a major hurdle due to relatively high power consumption and footprint requirements. Overcoming these challenges for highly miniaturized devices for use in songbirds is critical to enable advanced interrogation and modulation capability. To demonstrate the sensing capabilities of the presented platform, we have included thermography capability with mK resolution. The sensing modality enables the observation of physiological baselines such as the circadian rhythm, song tempo, and mating behavior [20]. The low noise and high fidelity of this analog front end also showcase the capability to integrate sensing modalities that require low noise analog electronics that are stable over the duration of the device and animal model lifetime. A detailed layout of the thermography modality is shown in Fig. 4a. The layout was carefully considered to ensure high sensor fidelity and to exclude the impact of surrounding components on the sensing results. The NTC thermistor, shown in green, was placed in a location that prevents parasitic heating from the μC, HF rectifying diodes, Zener protection diodes, and the traces that route from these components. Measurements of steady state temperatures of these components in air can be found in Supplementary Information Fig. S12. Corresponding finite element analysis (FEA) validates the measurements in air and simulates temperatures when implanted. The results indicate that device components do not influence the recorded temperature or affect the surrounding tissues. Increases in temperature during operation are <0.55 °C at hotspots on the device and <0.005 °C at the NTC sensor. An electrical schematic of the analog front end is shown in Fig. 4b. Here, the NTC thermistor (100 kΩ) is placed in a Wheatstone bridge configuration and the resulting voltage is amplified with a low power operational amplifier (117× gain). A calibration curve relating ADC values and temperatures is provided in Fig. 4c over a dynamic range of 7.37 K, indicating a resolution of 1.8 mK and an accuracy of ±0.097 K when compared against a digital thermocouple thermometer (Proster). The thermographic recording capabilities of the device can be used for a range of applications, including chronic measurements of sleep-wake cycles in animal subjects as well as recording millisecond-level temperature changes, as shown in Supplementary Information Fig. S13.
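A sketch of how such an analog front end maps ADC counts back to temperature is shown below. The bridge topology, supply rail, ADC depth, and NTC beta value are illustrative assumptions; the paper's own calibration is the empirical curve in Fig. 4c.

```python
# Hedged sketch: ADC counts -> temperature for a 100 kOhm NTC in a
# Wheatstone bridge with 117x amplification. Supply (2.7 V), 10-bit ADC,
# equal 100 kOhm bridge arms, and beta = 3950 K are assumptions.
import math

VDD, GAIN, ADC_MAX = 2.7, 117.0, 1023
R_FIXED, R0, T0, BETA = 100e3, 100e3, 298.15, 3950.0

def adc_to_temperature_c(counts: int) -> float:
    v_out = counts / ADC_MAX * VDD          # amplifier output voltage
    v_bridge = v_out / GAIN                 # differential bridge voltage
    # Bridge: v_bridge = VDD * (R_ntc/(R_ntc+R_FIXED) - 0.5); solve for R_ntc.
    ratio = v_bridge / VDD + 0.5
    r_ntc = R_FIXED * ratio / (1.0 - ratio)
    # Beta model: 1/T = 1/T0 + (1/beta) * ln(R/R0)
    t_kelvin = 1.0 / (1.0 / T0 + math.log(r_ntc / R0) / BETA)
    return t_kelvin - 273.15

print(f"{adc_to_temperature_c(512):.3f} C")
```

The large gain is what spreads the narrow 7.37 K dynamic range over the full ADC span, which is where the quoted mK-level resolution comes from.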
Data uplink is achieved via IR digital data communication. This mode of data uplink has been successfully implemented in rodents [2] and is chosen here because of its ultra-small footprint (0.5 mm²) and the low number of peripheral components needed. Unlike in previous studies with rodents, energy throughout the arena is limited and poses a challenge for continuous data uplink. The simplified electrical schematic of the multimodal optogenetic stimulator and thermography device is shown in Fig. 4d. To enable optogenetic stimulation and data communication on a device with a highly miniaturized footprint, and therefore limited energy harvesting performance, advanced energy management is required. This is achieved via a capacitive energy storage that holds sufficient energy to support events with power requirements that surpass the harvesting capability. Specifically, a capacitor bank (6 × 22 μF capacitors with a maximum energy storage of 481 μJ and a footprint of 1.5 mm²) is charged to 5.6 V during events where the power consumption is low, such as μC sleep phases in between stimulation, digitalization, or data uplink, and energy can be withdrawn during high powered events such as data uplink. The charge state of the capacitor bank is indicated by the black voltage trace in Fig. 4d, where a data uplink event is recorded. Initially, a μC power consumption (orange trace) of 30 mW during sampling of the ADC and writing of the EEPROM is recorded (green background), followed by the data uplink event, which requires a peak power of 19.12 mW and an average power of 8.83 mW (violet background). To achieve a lower capacitor bank size, and therefore a highly miniaturized footprint, advanced data uplink management is introduced to keep the sending event short. Data uplink events are shortened by splitting data packages into two 8-bit fragments which are recombined to 12 bits at the receiver. This results in a sending event length of 16.9 ms, which requires an energy of 149 μJ, less than the capacitive energy storage holds (details of overall device power requirements in Supplementary Information Fig. S14). This buffer capability is visible as the voltage drop at the capacitive energy storage during sending events while the regulated system voltage is retained, allowing for stable operation even in environments where harvested power does not meet the demand on the device. The resulting capability to momentarily store harvested energy enables operation in experimental arenas that extend to volumes that provide lower average power than the peak power demand on the device, significantly increasing experimental arena volumes.

[Fig. 3 caption fragment: ...and power across the μ-ILED (blue trace) to program the device for stimulation of the left probe at 30 Hz and 5% duty cycle (top set of graphs) and stimulation of the right probe at 10 Hz and 15% duty cycle (bottom set of graphs). c Protocol for state selection using 12 bits of data to select device, frequency, duty cycle, and stimulation probe. d Photographic image of three devices switching states wirelessly. e Photographic image of device serpentine stretched in the elastic regime with inset of finite element simulation indicating displacement of 1.2 mm (applied strain of 24%) and observed strain in the copper layer of 4%. f Example configuration of injectable dual probes for targeting specific areas of the finch brain.]

A typical setup is illustrated in Fig. 4e, where four receivers are placed at a height of 35 cm and receive signals generated by the implant to enable beyond-line-of-sight data uplink from the test subject [21]. The resulting uplink performance in this experimental paradigm is shown in Fig. 4f and indicates stable data rates with no dropouts. Multimodality of the devices is demonstrated in Supplementary Movie SV2, showing two devices recording temperature and switching optogenetic stimulation protocols simultaneously. For the first time, multiple wireless, battery free devices capable of simultaneous recording and stimulation are demonstrated, enabling control over multiple subjects in the same experimental enclosure. The result of the miniaturization efforts is a device footprint and outline that allows for facile subdermal implantation.
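The buffering numbers above can be cross-checked with a short energy-budget calculation; the usable lower voltage bound is an assumption, while the uplink figures (8.83 mW average over 16.9 ms) are taken from the text.

```python
# Energy budget sketch for the capacitor bank buffering an IR uplink event.
# Bank: 6 x 22 uF charged to 5.6 V; the minimum usable voltage is assumed.
C_BANK = 6 * 22e-6          # farads
V_HI = 5.6                  # volts, charged state
V_LO = 4.9                  # volts, assumed floor for regulated operation

usable_uj = 0.5 * C_BANK * (V_HI**2 - V_LO**2) * 1e6
uplink_uj = 8.83e-3 * 16.9e-3 * 1e6   # average power x event length
print(f"usable buffer energy: {usable_uj:.0f} uJ")   # ~485 uJ
print(f"uplink event energy:  {uplink_uj:.0f} uJ")   # ~149 uJ
print("buffer sufficient:", usable_uj > uplink_uj)
```

With these assumptions the usable buffer (~485 μJ) is close to the 481 μJ quoted for the bank and comfortably covers the ~149 μJ uplink event, which is consistent with the stable uplink reported in Fig. 4f.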
Figure 5a shows a micro-CT scan of the multimodal optogenetic stimulator and thermography device in axial (left) and sagittal/coronal (right) orientation. The device is not visible from the outside, and the low weight and profile result in minimal impact on the subject. This is quantified in subject behavioral experiments. Figure 5b shows heat maps displaying subject behavior over a 14 h time period mapped on a schematic of the experimental enclosure. The spatial position pattern of the bird before and after device implantation is not qualitatively affected. Quantitative indication of minimal impact is evident when computing the distance traveled during the experiment, which shows similar activity before and after device implantation. Impact of the magnetic field on the subjects is minimal, as shown by similar studies that use magnetic fields as a power source [1]. Sound emitted by the setup was investigated by recording sound levels and analyzing frequency components in an empty experimental chamber; there was no measurable noise or change in noise with the system active (Supplementary Information Fig. S15). Proof of concept stimulation capabilities of the device were tested by targeting Area X, a song-dedicated basal ganglia brain nucleus, in adult male zebra finches. We use viral delivery pathways established in a prior study by Xiao et al., injecting an adeno-associated virus expressing human channelrhodopsin into the ventral tegmental area (VTA), where it is taken up by dopaminergic neurons whose axons project to Area X. Prior studies show that continuous optogenetic stimulation of this VTA to Area X pathway over multiple days in tethered adult male zebra finches can alter their ability to pitch shift individual syllables within their songs in a learning task [11]. Here, we use the same viral vector and targeting strategy (Supplementary Information Fig. S16) to stimulate Area X unilaterally over a single, short session (15-30 min) and elicit pitch shifts as a proof of principle. Stimulation parameters of 20 Hz and 15% duty cycle were used during the experimental sessions. Histological assessment confirms opsin expression and targeting of the probe (Supplementary Information Fig. S16). Song behavior was recorded and analyzed just prior to and throughout the stimulation period (spanning 15-30 min). The basic unit of the bird's song is a motif, displayed as a spectrogram in which individual syllables within the motif are identified based on their structural characteristics (Fig. 5c, f). Here, measurements of fundamental frequency (f_o, pitch) are made only from syllables within the song that have a clear, uniform harmonic structure, as per established criteria in the field [22] (Supplementary Information Figs. S17-S19). Each syllable is then scored across 25 consecutive song renditions to examine millisecond by millisecond changes in f_o (see Methods). Song analysis reveals that across multiple subjects, a statistically significant shift in f_o (downward for Subjects A and B; upward for Subject C) is detected with stimulation (Fig. 5d, e, g, h). Device operation for multimodal devices with thermography capabilities features stable operation across an extended time period following the advanced energy management described above. The resulting system stability in freely moving subjects is shown in Fig. 5i: the sampling rate stays stable over two trials with 10 h of recording time.
An example of the temperature recording capability is shown in Fig. 5j, where a section of a 4 h recording shows the temperature minimum in the circadian rhythm of a songbird, indicated by an initial decrease followed by an upward trend in temperature, consistent with circadian fluctuations when the bird is resting [20]. Device function was checked via link stability measurements consisting of 5-minute-long recordings in a 20-cm-diameter circular experimental arena performed periodically, with data rates indicating stable device performance over the course of 9 weeks after implantation, when experiments were terminated (Fig. 5k).

Discussion

Device performance indicates a robust platform that enables the long term, high-fidelity readout of analog sensors with high precision and accuracy, coupled with multimodal optogenetic stimulation, with a footprint (50.62 mm² for the dual optogenetic probe device and 61.05 mm² for the multimodal optogenetic stimulator and thermography device), volume (15.19 mm³ and 19.25 mm³, respectively) and weight (44 mg and 84 mg, respectively) that is sufficiently small for subdermal implantation in passerine birds. Successful optical modulation of songs in zebra finches with minimal impact on subject behavior enables advanced study of flying subjects. The implant function is enabled by advanced concepts for highly miniaturized battery free electronics: primary antenna designs guided by deep neural network analysis of subject video to maximize transmission, advanced protocols for multimodal optogenetic stimulation control, and advanced energy management that strategically uses energy saved in local miniaturized capacitive energy storage to enable data uplink throughout experimental enclosures suitable for birds. The combination of these advances provides a toolbox for the design of investigation platforms for flying subjects that substantially expands the operational conditions of wireless, battery free, subdermally implantable modulation and recording tools for the central nervous system. The current device embodiment features thermal recording capabilities, highlighting the ability to include a variety of sensors. Examples of possible future recording capabilities include photometric probes for cell specific recording, which have been demonstrated in rodent subjects [2], and electrophysiological recording to capture non-cell-specific firing activity. The advances presented in this work provide foundational design approaches to enable tools for studies in songbirds of complex social interactions and chronic changes in song and behavior.

Methods

Device fabrication. Flex circuits were composed of Pyralux AP8535R substrate. Top and bottom copper layers (17.5 μm) on a polyimide substrate layer (75 μm) were defined via direct laser ablation (LPKF U4). Ultrasonic cleaning (Vevor; Commercial Ultrasonic Cleaner 2L) was subsequently carried out for 10 min with flux (Superior Flux and Manufacturing Company; Superior #71) and 2 min with isopropyl alcohol (MG Chemicals), followed by rinsing in deionized water to remove excess particles. Via connections were manually established with copper wire (100 μm) and low temperature solder (Chip Quik; TS391LT). Device components were fixed in place with UV-curable glue (Damn Good 20910DGFL) and cured with a UV lamp (24 W) for 10 min. The devices were encapsulated with a parylene coating (14 μm) via chemical vapor deposition (CVD).

Electronic components.
Components were manually soldered onto the device with low temperature solder (Chip Quik; TS391LT). The rectifier was composed of two Schottky diodes, tuning capacitors (116.2 pF), and a 2.2 μF buffer capacitor. A Zener diode (5.6 V) provided overvoltage protection. A low noise 500 mA low-dropout regulator with fixed internal output (Fairchild FAN25800; 2.7 V) stabilized power to the μC. Six 22 μF capacitors in parallel served as capacitive energy storage to buffer high energy events. A small outline μC (ATtiny84A, 3 mm × 3 mm; Atmel) was used to control the timing of μ-ILED activation, readout and digitalization of the NTC, and IR communication. A Tiny AVR programmer and USB interface were used to upload software onto the μC. Blue μ-ILED (CREE TR2227) current was set with a current limiting resistor (270 Ω) to control the irradiance of the blue LED stimulation. An operational amplifier (Analog Devices ADA4505-1) was used in a differential configuration to read out the resistance of the NTC, which was then digitized by the μC. A 0402 IR LED was used to transmit the modulated digital signal (57 kHz).

RF characterization. Secondary antenna performance was empirically tested by varying tuning capacitors to produce the lowest voltage standing wave ratio at 13.56 MHz during reflection testing with a reflection bridge (Siglent SSA 3032X). Characterization of the secondary antenna was carried out with a cage in a Helmholtz-like configuration at the center of the x-y coordinates and a height of 25 cm from the cage floor with an input power of 10 W. Characterization of primary antennas was carried out with standard (22 AWG) wire wrapped around a custom-built arena (35 cm × 35 cm × 20 cm). An auto-tuner system (Neurolux) was used to tune the cage to a radiofrequency of 13.56 MHz. The power distribution of the primary antenna was characterized by measuring the voltage produced by the primary antenna and a shunt resistor in series at a variety of equidistantly spaced locations at 5, 9, 13, 17, 21, and 25 cm from the cage floor.

Electrical characterizations. Electrical properties of the state selection for the dual optogenetic probe device and the multimodal optogenetic stimulator and thermography device were determined with a current probe (Current Ranger; LowPowerLab) and an oscilloscope (Siglent SDS 1202X-E). The alternating voltage of the cage was determined by measuring the voltage of a secondary antenna without a rectifier, tuned to 13.56 MHz and placed in the magnetic field (pickup coil). Voltage at the μC of the device was determined by measuring the voltage produced after the rectifier and the 2.7 V LDO with an oscilloscope. Power to the μ-ILED was measured by passing the current entering the μ-ILED through a current probe (Current Ranger; LowPowerLab). Power harvesting capabilities of the device can be found in Supplementary Information Figs. S7 and S8. Voltage measurements at the capacitor bank during digitalization and sending events of the thermal sensing device were carried out by powering the device with a custom battery powered power supply in series with a current limiting resistor (900 Ω) to match the power input from the RF field, and were subsequently measured with an oscilloscope. The power consumed before the LDO at input voltages that represent the capacitor bank range is shown in Supplementary Information Fig. S14. Voltage measurements of the IR LED during sending events were made with an oscilloscope (Siglent SDS 1202X-E). Power to the μC was determined by passing the current supplied by the LDO to the μC through a current probe (Current Ranger; LowPowerLab).
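As a quick plausibility check on the tuning values above, the resonance condition f = 1/(2π√(LC)) implies the antenna inductance that pairs with the quoted 116.2 pF at 13.56 MHz. The computed inductance is an inference from that relation, not a value reported in the paper.

```python
# Resonance check: inductance implied by tuning to 13.56 MHz with 116.2 pF,
# from f = 1 / (2*pi*sqrt(L*C))  =>  L = 1 / ((2*pi*f)**2 * C).
import math

f = 13.56e6     # Hz, ISM-band operating frequency
C = 116.2e-12   # F, tuning capacitance reported in the Methods

L = 1.0 / ((2 * math.pi * f) ** 2 * C)
print(f"implied secondary-antenna inductance: {L * 1e6:.2f} uH")  # ~1.19 uH
```

An inductance on the order of 1 μH is plausible for a small multi-turn planar coil, consistent with the 8- to 10-turn dual layer antennas described in the Results.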
CT imaging. Micro-CT imaging was performed on a post-mortem skull preserved in formaldehyde. Images of the finch were acquired using a Siemens Inveon μ-CT scanner with "medium-high" magnification, an effective pixel size of 23.85 μm, 2 × 2 binning, 720 projections made in a full 360° scan, and an exposure time of 300 ms. A peak tube voltage of 80 kV and a current of 300 μA were used to obtain the left image of Fig. 5a, and a peak tube voltage of 65 kV and a current of 400 μA were used to obtain the right image of Fig. 5a. Reconstruction was done using a Feldkamp cone-beam algorithm.

Mechanical simulations. Ansys® 2019 R2 Static Structural was utilized for static-structural FEA simulations to study the elastic strain in both the single and bilateral serpentines when stretched. The components of both devices, including the copper traces, PI, and parylene encapsulation layers, were simulated in accurate layouts. The models were simulated using Program Controlled Mechanical Elements, with the resolution of the mesh elements set to 7, a minimum edge length of 5.813 μm, and at least two elements in each direction in each body to ensure mesh convergence. The Young's modulus (E) and Poisson's ratio (ν) were: E_PI = 4 GPa, ν_PI = 0.34 [23]; E_Cu = 121 GPa, ν_Cu = 0.34 [24]; E_Parylene = 2.7579 GPa, ν_Parylene = 0.4 [25]. For both probes, a fixed support was added to the faces marked B in Supplementary Information Figs. S10 and S11. Strains for the single and bilateral serpentines were applied using a displacement as the load on the faces marked A, in the direction of the arrows shown, of 55% (2.1 mm) and 24% (1.2 mm), respectively.

Thermal simulations. COMSOL® Multiphysics Version 5 was used to create a finite element model to simulate the thermal impact of device operation. The models were used to determine steady state temperatures of the device after 500 s of operation. Major heat sources were simulated by using the μC, rectifier, LDO, amplifier, μ-ILED, and IR LED as heat sources. Electrical components, copper traces, and PI were simulated with component topologies and accurate layout. The mesh was generated with a minimum element size of 0.181 mm and a maximum of 1.01 mm. Simulations were set up using natural convection in air with an initial temperature of 22 °C to reflect benchtop experiments of device heating; operation in vivo was simulated with PBS as the surrounding medium with an initial temperature of 39 °C to mimic the average body temperature of the subject. Thermal input powers of each simulated component were set as follows: μC 1 mW; LDO 2 mW; rectifier 10 mW; IR LED 0.5 mW; μ-ILED 0.5 mW; and resistors 0.05 mW. The following thermal conductivity, heat capacity, and density were used for the capacitors: 3.7 W m⁻¹ K⁻¹, 0.58 J kg⁻¹ K⁻¹, and 2500 kg m⁻³.

Video tracking and motion analysis. Videos were recorded with two cameras (Anivia 1080p HD Webcam W8, 1920 × 1080, 30 FPS) mounted above and in front of the arena for a bird with an implant (n = 3) and a bird without an implant (n = 2). The summarized workflow for the tracking steps is available in Supplementary Information Fig. S3. Tracking of the head's position for both top-view and side-view videos was performed using DeepLabCut Version 2.2.b6 [18]. A distinct body feature (the beak of the bird) was chosen for tracking. Training was performed individually for each camera view. The training session of the model was accomplished with 13 min of recorded video; a frame extraction rate of 1 frame/second was used to capture 780 frames from each video. Training was computed on a High-Performance Computer (University of Arizona HPC) with 200,000 iterations for each camera view. The software was set to track 14-hour videos for both camera views. The results of the tracking session were extracted in Excel format, containing the X and Y coordinates and the confidence value (likelihood) of each data point. Data points with a confidence value greater than 99% were used for heat maps and analysis and were then inserted into SimBA (version 1.2) for the evaluation of the total distance traveled [26] as well as for the trace tracking plots (Supplementary Information Fig. S4). The results of the tracking were used to draw the heat maps of each camera view with MATLAB (Version R2020a) [27].
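A compact sketch of the post-processing described above (likelihood filtering followed by total distance traveled) is given below; the pixel-to-centimeter scale and the synthetic coordinates are illustrative assumptions.

```python
# Hedged sketch: filter DeepLabCut points by likelihood, then estimate the
# total distance traveled as the sum of frame-to-frame Euclidean steps.
import numpy as np

def distance_traveled(x, y, likelihood, px_per_cm=10.0, min_conf=0.99):
    """x, y in pixels; px_per_cm is an assumed calibration factor."""
    keep = likelihood > min_conf
    x, y = np.asarray(x)[keep], np.asarray(y)[keep]
    steps = np.hypot(np.diff(x), np.diff(y))   # per-frame displacement (px)
    return steps.sum() / px_per_cm             # total distance in cm

# Tiny synthetic example: a bird hopping along a diagonal.
x = np.array([0, 10, 20, 30, 40], dtype=float)
y = np.array([0, 10, 20, 30, 40], dtype=float)
conf = np.array([1.0, 1.0, 0.5, 1.0, 1.0])    # one low-confidence frame dropped
print(f"{distance_traveled(x, y, conf):.1f} cm")
```

Comparing this distance metric before and after implantation is what supports the claim of minimal behavioral impact in Fig. 5b.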
Animal experiments. Implanted birds (n = 3) were acclimated in the testing arena prior to experiments. Song recordings and stimulation experiments were conducted in a bird cage (35 cm × 35 cm × 20 cm) with perches and water placed inside, housed within a sound attenuation chamber (Eckel Noise Control Technologies, Cambridge, MA). Birds were recorded, and song was analyzed, from a 15 min period before stimulation and then over a 15-30 min period of stimulation with 10 W input power into the system. Arenas were cleaned and water was changed before each experiment. For temperature sensing experiments with a separate group of finches (n = 3), data uplink was established with four IR receivers placed 35 cm from the base of the arena and connected to a computer running Arduino software. Data communication was recorded for 10 h with an input power of 10 W into the system.

Subjects and surgical procedures. All animal use was approved by the Institutional Animal Care and Use Committee at the University of Arizona. Adult male zebra finches (n = 8) between 120 and 300 days post-hatch were used in this study. Male finches were moved to sound attenuation chambers (Eckel Industry, Inc., Acoustic Division, Cambridge, MA) under a 14:10 h light:dark cycle and acclimated to their new housing for at least several days prior to surgery. Cameras (Anivia 1080p HD Webcam W8, 1920 × 1080, 30 FPS) in the chamber recorded the birds' physical movements using iSpy software. Surgery was conducted on isoflurane-anesthetized birds, with the analgesic lidocaine injected subcutaneously under the scalp to minimize discomfort prior to device implantation. The optogenetic device was implanted subdermally and affixed to the skull with surgical glue. The device consisted of a wireless energy harvesting module and an extendable, articulated stimulating probe with a blue 465 nm light emitting diode at the tip (μ-ILED, CREE TR2227). The stimulating probe was extended unilaterally into the song-dedicated brain nucleus Area X using stereotaxic coordinates starting from the bifurcation of the midsagittal sinus (40° head angle, 3.5 mm rostral-caudal, 1.62 mm medio-lateral, 3.1 mm depth). In the same surgery, finches received a bilateral injection of 700 nl of an adeno-associated virus (AAV) driving human channelrhodopsin (AAV1-CAG-ChR2-ts-HA-NRXN1-p2a-EYFP-WPRE from T. Roberts, U. Texas Southwestern Medical Center) targeting the ventral tegmental area/substantia nigra complex (VTA/SNc) that sends nerve projections into Area X (37-40° head angle, 1.65 mm rostral-caudal, 0.27-0.40 mm medio-lateral, 5.8-6.33 mm depth).
Virus was loaded into a pulled glass pipette fitted into a Nanoject II pressure injector, backfilled with mineral oil, and then injected at a rate of 27.6 nl/injection every 15 s for a total of 700 nl per midbrain hemisphere. Prior work established the efficacy of this AAV in the optical excitation of the VTA to Area X pathway by blue light (465 nm) [11] and the subsequent shift in syllable pitch. After 5 min, the pipette was slowly retracted and the tip visually inspected for clogging. Birds were then returned to sound chambers following post-operative monitoring and allowed to recover for a week. Sulfatrim antibiotic was provided in the drinking water. Stimulation was performed between 3 to 4 weeks post-surgery. In two finch subjects, the device implant was affixed to the skull only, without the injectable stimulating probe, in order to collect information about scalp temperature (Fig. 5i-k).

Song recording and analysis. Morning song from lights-on was acquired from individually housed male finches using Shure 93 lavalier condenser omnidirectional microphones. Songs were digitized (PreSonus Audiobox 1818 VSL, 44.1 kHz sampling rate/24 bit depth, Niles, IL) and recorded by the freeware program Song Analysis Pro (SAP, http://soundanalysispro.com/) [28]. Spectrograms were then viewed in SAP for further analysis, and figures were displayed using Audacity (https://www.audacityteam.org/). Zebra finch song consists of a sequence of repeated syllables that comprise a motif (Fig. 5c, f and Supplementary Information Figs. S17-S19). Individual syllables were identified using SAP and a semi-automated clustering program (VOICE) [28]. Prior power analyses determined that 25 consecutive syllable copies within a bird are sufficient to detect meaningful differences based on experimental condition [30]. The wav files for 25 harmonic syllables were then run in Matlab version R2014a using the SAP SAT Tools and PRAAT to obtain measurements of fundamental frequency, f_o (pitch), as per prior work [22,31]. Individual f_o values were plotted from 25 consecutive copies just prior to optical stimulation ("Pre-Stimulation") and compared to 25 copies of the same syllable during the "Stimulation" period at 20 Hz and 15% duty cycle (Supplementary Information Fig. S20).

Statistical analysis. Because the syllable data did not fit a normal distribution, we used the non-parametric Wilcoxon signed-rank test for paired data, in which the scores for the same syllable were compared between pre-stimulation and stimulation periods. Significance was set at p < 0.05 (IBM SPSS Statistics for Windows version 26, Armonk, NY), and p values are reported in Supplementary Information Table ST1.
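An equivalent paired Wilcoxon comparison can be run outside SPSS; the sketch below uses scipy on synthetic f_o values standing in for the 25 pre- and during-stimulation syllable copies.

```python
# Hedged sketch of the paired Wilcoxon signed-rank comparison of syllable
# fundamental frequency (f_o) before vs. during stimulation. The f_o values
# below are synthetic placeholders, not data from the paper.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
pre = rng.normal(650.0, 5.0, 25)          # Hz, 25 pre-stimulation copies
stim = pre - rng.normal(4.0, 2.0, 25)     # Hz, simulated downward pitch shift

stat, p = wilcoxon(pre, stim)             # paired, non-parametric test
print(f"W = {stat:.1f}, p = {p:.4f}")     # p < 0.05 -> significant shift
```

The non-parametric test is the appropriate choice here precisely because the per-syllable f_o scores failed the normality checks described above.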
Tissue histology. Finches that received the AAV injection into VTA and the optogenetic device implant into Area X were humanely euthanized with an overdose of isoflurane inhalant and then transcardially perfused with warmed saline followed by chilled 4% paraformaldehyde in Dulbecco's Phosphate-Buffered Saline. Fixed brains were cryoprotected in 20% sucrose overnight and then coronally sectioned at 30 µm on a Microm cryostat. Tissue was processed for fluorescent immunohistochemistry using a procedure similar to Miller et al. 2015 [32]: hydrophobic borders were drawn on the slides using a pap pen (ImmEdge, Vector Labs), followed by 3 × 5 min washes in 1X TBS with 0.3% Triton X (Tx). To block nonspecific antibody binding, the tissue was then incubated for 1 h at room temperature with 5% goat serum (Sigma-Aldrich #G-9023) in TBS/0.3% Tx, followed by 3 × 5 min washes in 1% goat serum in TBS/0.3% Tx. Primary antibodies were incubated in a solution of 1% goat serum in TBS/0.3% Tx overnight at 4 °C. For the VTA/SNc region, a primary rabbit polyclonal antibody was applied (1:500 Tyrosine Hydroxylase (TH), Millipore Sigma #AB152) to mark dopaminergic cell bodies, together with a primary mouse monoclonal antibody to detect virus expression via Green Fluorescent Protein (1:100, ThermoFisher 3E6, #A11120). A "no primary antibody" control was performed during initial testing. The following day, sections were washed 5 × 5 min in 1x TBS/0.3% Tx and incubated for 3 h at room temperature in fluorescently conjugated secondary antibodies in 1% goat serum with 1x TBS/0.3% Tx (ThermoFisher 1:1000, goat anti-rabbit 647 #A-21245 for TH and goat anti-mouse 568 #A11031 for GFP). After secondary incubation, sections were washed 3 × 10 min in TBS followed by 2 × 5 min washes in filtered TBS. Slides were then cover-slipped in Pro-Long Anti-Fade Gold mounting medium (Invitrogen, #P36930) and imaged on a Leica DMI6000B with a DFC450 color CCD camera (Leica Microsystems, Buffalo Grove, IL) using the Leica LAS-X version 3.3 software. To assess whether the optogenetic probe induced damage in Area X, tissue sections were processed through a Nissl staining procedure (1% thionin, Fishersci #50520580), cleared in xylene (Fishersci X5-4), mounted in DPX (Sigma-Aldrich #6522), and visualized using a Nikon Eclipse E800 bright field stereoscope connected to an Olympus color CCD DP73 camera and CellSens software.
N-partitioning, nitrate reductase and glutamine synthetase activities in two contrasting varieties of maize

In order to identify useful parameters for maize genetic breeding programs aiming at a more efficient use of N, two maize varieties of contrasting N efficiency, Sol da Manhã NF (efficient) and Catetão (inefficient), were compared. Experiments were carried out under field and greenhouse conditions, at low and high N levels. The parameters analysed included total and relative plant and grain N content, biomass, and the activities of nitrate reductase and glutamine synthetase in different parts of the plant. It was found that the translocation efficiency of N and photoassimilates to the developing seeds and the source-sink relations were significantly different for the two varieties. N content of the whole plant and grain, cob weight and the relative ear dry weight were useful parameters for characterizing the variety Sol da Manhã NF as to its efficient use of N. Enzyme activities of glutamine synthetase (transferase reaction) and nitrate reductase did not differ between the varieties.

Introduction

Nitrogen (N) is the most expensive nutrient and the one required in the largest quantity by the majority of crops, especially maize. Although present at high levels in soils, the amounts of N in mineral forms are generally low. It is commonly observed that in natural ecosystems a continued loss of N occurs not only through its removal by plants but also by leaching, volatilization and denitrification. It has been estimated that only 40% to 50% of the N applied as fertilizer is absorbed by the maize crop under tropical conditions (Peoples et al., 1995). The availability of N to plants in tropical climates is also impaired by environmental stresses such as drought, waterlogging, low fertility soils and aluminium, among others (Magalhães & Fernandes, 1993; Machado & Magalhães, 1995).

Variations in N availability can affect plant development and grain production in maize (McCullough et al., 1994; Uhart & Andrade, 1995b). The effect of N availability on grain yields of maize can be assessed by physiological components such as the interception and efficient use of radiation and the partitioning of N to reproductive organs (Uhart & Andrade, 1995b). Parameters such as leaf area index, longevity of the leaf canopy and the efficient use of light in maize are all increased by N (Muchow & Davis, 1988). It is also known that both deficiency and excess of N affect the partitioning of assimilates between vegetative and reproductive organs (Donald & Hamblin, 1976). N deficiency affects the supply of assimilates to the ear mainly through reductions in leaf area, photosynthetic activity and light absorption (Lemcoff & Loomis, 1986; Uhart & Andrade, 1995b).
The flow and remobilization of carbon (C) and N to the kernels during the grain filling period depend on the source-sink relationship, which is influenced by genotype-environment interactions that, among others, may be altered by management factors such as planting date, population density, nutrients and water (Uhart & Andrade, 1995a). Furthermore, stress conditions can undoubtedly act directly on the availability, assimilation and metabolism of N, interfering with the activity of enzymes of N metabolism and probably with the catabolism of amino acids, proteins and other nitrogenous compounds (Machado et al., 1992; Magalhães & Fernandes, 1993). Genetic improvement trials focused on tolerance to abiotic and biotic stresses can result in the selection of genotypes that are more efficient at scavenging and using N under stress conditions (Tollenaar et al., 1991; McCullough et al., 1994).

The selection of genotypes with a more efficient mechanism of N uptake and metabolism is a strategy aimed at increasing the N utilization efficiency of the maize crop. Several trials for efficient use of N under conditions of low N availability have been carried out with maize (Thiraporn et al., 1987; Eghball & Maranville, 1991; Machado et al., 1992). In order to characterize and select genotypes for efficient use of N, several authors have used physiological and biochemical parameters, such as high nitrate in leaves (Molaretti et al., 1987), increased nitrate reductase activity (Cregan & Berkum, 1984; Feil et al., 1993), glutamine synthetase activities (Machado et al., 1992; Magalhães et al., 1993; Machado & Magalhães, 1995), or increased mobilization of N from leaves and stems to the kernels (Eghball & Maranville, 1991).

The objective of this study was to identify morphological, physiological and biochemical parameters that could help genetic improvement programs obtain maize genotypes that are more efficient in the use of N.

Plant material

The two maize varieties used in this study, Sol da Manhã NF and Catetão, are considered to be highly and poorly efficient in the use of N, respectively (Machado, 1997). Catetão is a local variety with flint, dark orange kernels whose germplasm is derived mainly from the Cateto race (Paterniani & Goodman, 1978; Soares et al., 1998). Sol da Manhã NF has semi-flint to flint, yellow-orange kernels showing segregation for white kernels. Its germplasm is derived from 36 populations of Central and South America, mainly from Cateto, Eto, and Flints from the Caribbean (Machado, 1997). Both varieties contain germplasm of similar origin. Although the variety Catetão is little used today in view of its low yields, in this study it was useful as a control to evaluate the evolution of efficiency from an old to a more modern variety (Sol da Manhã NF).

Experimental design and analytical methods

Two trials were carried out, one in the field and another in a greenhouse. The experimental design was a factorial with treatments arranged in a randomized complete block design with four replications. The data were subjected to analysis of variance, and when F was significant (p < 0.05) the LSD (least significant difference) test was applied.
The field experiment was carried out in tanks measuring 20 × 3 m and 2 m deep, containing a Red-Yellow Podzolic soil, limed and fertilized as indicated by soil analysis. Each plot consisted of a 3 m row. After thinning, the final stand was 15 plants per plot. N fertilizer for the low N level treatment was all applied at planting, at the rate of 10 kg/ha of N in the form of urea. For the high N level treatment, 100 kg/ha of N was applied as urea: 1/3 at planting and 2/3 at 45 days after planting.

At harvest, kernel moisture and grain weight were determined. At 55 days after planting, the activities of the enzymes nitrate reductase and glutamine synthetase (transferase assay) were determined. Samples of tissue were removed from the first leaf above and opposite to the superior ear, according to Reed et al. (1980). Samples of 0.5 g of tissue were used. The method is described below.

After harvest, plants were taken to the laboratory and separated into husk, ear, grain and tassel. The leaves and stems were separated into three parts, denominated lower, mid, and upper. The leaves of the mid region were those present at two nodes above and two below the upper ear, and the corresponding stem represented the mid stem. The leaves of the lower region were those situated immediately below the mid region, and those of the upper region were situated immediately above the mid region. Following separation, the plant parts were dried in an oven at 70 °C for three days and finally weighed and ground. N was determined in a sample of 200 mg by Kjeldahl digestion followed by distillation and titration, according to Bremner & Mulvaney (1982).

N content was determined in different parts of the plant, defined as: N_LL = lower leaf nitrogen; N_LSt = lower stem nitrogen; N_LSh = lower shoot nitrogen (N_LL + N_LSt); N_ML = mid leaf nitrogen; N_MSt = mid stem nitrogen; N_H = husk nitrogen; N_MSh = mid shoot nitrogen (N_ML + N_H + N_MSt); N_UL = upper leaf nitrogen; N_USt = upper stem nitrogen; N_Tas = tassel nitrogen; N_USh = upper shoot nitrogen; N_Cob = cob nitrogen; N_G = grain nitrogen; N_Pl = plant nitrogen (shoot minus grains); N_T = total plant nitrogen; Ns = fertilizer N applied.

Based on the model proposed by Moll et al. (1982), N use efficiency was defined as grain production per unit of N available in the soil: N use efficiency = Wg/Ns, in which Wg is grain weight and Ns is N supply expressed in the same unit (e.g., g/plant). There are two primary components of N use efficiency: 1) the efficiency of absorption (uptake), and 2) the efficiency with which the N absorbed is utilized to produce grain. These are expressed as follows: uptake efficiency = Nt/Ns and utilization efficiency = Wg/Nt, where Nt is total N in the plant at maturity. Utilization efficiency can be partitioned as Wg/Nt = (Wg/Ng)(Ng/Nt), and Ng/Nt = (Na/Nt)(Ng/Na), in which: Wg/Ng = grain produced per unit of grain N; Ng/Nt = fraction of total N that is translocated to grain; Na/Nt = fraction of total N that is accumulated after silking; Ng/Na = ratio of N translocated to grain to N accumulated after silking.
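The Moll et al. (1982) decomposition above is simple enough to compute directly; the sketch below does so for illustrative per-plant values (the numbers are placeholders, not data from this study).

```python
# Sketch of the Moll et al. (1982) N use efficiency components.
# Input values (g/plant) are illustrative placeholders.
def moll_components(Wg, Ns, Nt, Ng, Na):
    return {
        "use efficiency (Wg/Ns)":        Wg / Ns,
        "uptake efficiency (Nt/Ns)":     Nt / Ns,
        "utilization (Wg/Nt)":           Wg / Nt,
        "grain per grain N (Wg/Ng)":     Wg / Ng,
        "translocation (Ng/Nt)":         Ng / Nt,
        "post-silking fraction (Na/Nt)": Na / Nt,
        "Ng/Na":                         Ng / Na,
    }

c = moll_components(Wg=120.0, Ns=2.0, Nt=1.6, Ng=1.1, Na=0.5)
for name, value in c.items():
    print(f"{name}: {value:.2f}")

# Identity check: Wg/Nt == (Wg/Ng) * (Ng/Nt)
assert abs(c["utilization (Wg/Nt)"] -
           c["grain per grain N (Wg/Ng)"] * c["translocation (Ng/Nt)"]) < 1e-9
```

The multiplicative structure is what makes the components useful for selection: a genotype can be ranked separately on uptake (Nt/Ns) and on utilization (Wg/Nt), and the latter further attributed to translocation (Ng/Nt).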
For the greenhouse experiment the same statistical design was followed, using four replications. The experimental plot was made up of pots with four plants. Each pot (3 L) contained vermiculite and initially held 10 plants; after 14 days, plants were removed to leave four plants per pot. Pots were irrigated with nutrient solution from the seventh day after planting, using stocks I (normal Hoagland solution) and II (Hoagland solution without N). The treatments with high N levels received 100 mg of N/week and the low N treatments, 10 mg of N/week.

Twenty-five days after planting, the activities of the enzymes nitrate reductase and glutamine synthetase (transferase reaction) were determined in the leaves, using the third expanded leaf (top down) of all four plants in the pot. Glutamine synthetase was also determined in the roots of the same four plants. After washing and drying with a paper towel, the roots were cut to obtain the middle portion of the root system.

Enzyme analyses

In both experiments (field and greenhouse), the in vivo nitrate reductase assay was carried out according to Reed et al. (1980), whereby the plant tissue (0.5 g) was vacuum infiltrated with a solution containing nitrate. The nitrite produced was determined after diffusion into the assay medium. The incubation medium contained 0.1 M phosphate buffer pH 7.5, 1% n-propanol and 0.1 M KNO3. After 10 and 40 min, 0.2 mL aliquots were removed for nitrite determination, by mixing with 1.8 mL of H2O, 1.0 mL of 1% sulfanilamide in 1.5 N HCl and 1.0 mL of 0.02% N-(1-naphthyl)-ethylenediamine dihydrochloride. The change in absorbance at 540 nm was used to calculate the amount of nitrite produced.

Glutamine synthetase activity was also determined in both experiments, in leaf (field and greenhouse experiments) and root tissue (greenhouse experiment), with samples of 0.5 g. These samples were ground in a chilled mortar with 10 volumes of 0.1 M imidazole-HCl buffer, pH 7.8, containing 1 mM DTT. The extract was filtered through muslin cloth and centrifuged at 15,000 g for 15 minutes at 2 °C. An aliquot of the supernatant was assayed by the transferase reaction according to Rhodes et al. (1975). The assay mixture contained 65 mM glutamine, 17 mM hydroxylamine, 33 mM Na arsenate, 4 mM MnCl2, 1.7 mM ADP and 100 mM imidazole buffer, pH 6.8, together with 0.2 mL of the extract in a final volume of 1.0 mL. The reaction was incubated at 32 °C for 30 min. Then 1 mL of FeCl3 reagent (0.67 N HCl, 0.2 N TCA and 0.37 M FeCl3) was added, the mixture was centrifuged and the colour read at 535 nm against a blank (Ferguson & Sims, 1971). Data are expressed as mmol glutamyl hydroxamate formed per hour per gram of fresh tissue.
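As a worked illustration of the in vivo nitrate reductase calculation, the absorbance readings at the two time points can be converted to a nitrite production rate via a standard curve. The standard-curve slope and sample absorbances below are assumptions, not values from the paper, and dilution factors from the aliquot step are omitted for brevity.

```python
# Hedged sketch: nitrate reductase activity from A540 readings taken at
# 10 and 40 min. Standard-curve slope and absorbances are placeholders,
# and aliquot dilution factors are omitted for simplicity.
SLOPE_ABS_PER_UMOL = 0.85   # A540 per umol nitrite (assumed calibration)
TISSUE_G = 0.5              # fresh tissue mass used in the assay

def nr_activity(a540_t10, a540_t40, minutes=30.0):
    """Return umol NO2- produced per gram fresh weight per hour."""
    delta_umol = (a540_t40 - a540_t10) / SLOPE_ABS_PER_UMOL
    return delta_umol / TISSUE_G * (60.0 / minutes)

print(f"{nr_activity(0.12, 0.48):.2f} umol NO2-/g FW/h")
```

Using the difference between the two time points, rather than a single endpoint, subtracts any nitrite already present at the start of the incubation.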
Results and Discussion

The levels of N in the different parts of the plant are shown in Tables 1 and 2. Catetão had greater accumulation of N in the leaves and stems of the lower and mid regions, whereas Sol da Manhã NF presented greater accumulation in the cob and grain as well as the highest total N accumulation, indicating higher N uptake from the soil. These data indicate great differences regarding the efficiency of two processes: the first involving the differential source-sink rate of the genotypes and the second the translocation of N. In this study, the variety Sol da Manhã had greater sink capacity, with higher accumulation of N in the grain and better translocation capacity, as compared to Catetão, which on the other hand accumulated more N in the leaves and stem (source).

Pan et al. (1995) suggest that absorption of N is regulated by the sink capacity of the ear and by the supply of photoassimilates. Moll et al. (1987) relate the efficiency of absorption and utilization of N and carbon in maize genotypes to a greater or lesser sink capacity. The data presented in this paper are consistent with the literature cited above, where the source/sink ratio was important in the characterization of the varieties.

Table 1. Mean N content of lower leaves (N_LL), lower stems (N_LSt), lower shoot (N_LSh) (leaves + stems), mid leaves (N_ML), mid stem (N_MSt), husks (N_H), mid shoot (N_MSh) (leaves + stems + husks), upper leaves (N_UL), upper stem (N_USt), tassel (N_Tas) and upper shoot (N_USh) (leaves + stems + tassel), in kg/ha, for the varieties Sol da Manhã NF and Catetão at two levels of N, one low (10 kg/ha) and the other high (100 kg/ha). Seropédica, RJ, Brazil, 1994.

Catetão had lower values for grain weight than Sol da Manhã NF at both N levels (Table 2). Moll et al. (1994), using the reciprocal selection method, obtained similar results using two varieties of temperate germplasm contrasting in N efficiency. Esteves (1985) observed that maize inbreds and hybrids efficient in the use of N were those that presented higher values for N content and dry weight, and suggested these parameters as suitable for efficiency evaluation.

The data obtained in this study show that the Sol da Manhã NF variety presented lower values for total plant dry matter and higher values for grain production compared to Catetão (Table 2). This suggests that total dry matter may not be a reliable criterion for N use efficiency in this case. It should be pointed out that both Sol da Manhã NF and Catetão were efficient in absorbing N, but they had different distribution patterns for N in the plant and different efficiencies of N translocation to the grain.

Biomass was higher for Sol da Manhã NF than for Catetão at both levels of N (Table 2). The components that most contributed to the biomass value of Sol da Manhã NF were the grain weight and, to a lesser extent, the weights of the mid region and the cob, whereas for Catetão the lower shoot region contributed most to biomass, followed by the grain weight (Table 3). With regard to total plant N, the main contributing components for Sol da Manhã NF were the grain and, to a lesser extent, the mid region of the shoot (Table 4). It is noteworthy that the accumulation of N in the lower, mid and upper regions together with the husks was far superior for Catetão compared to Sol da Manhã NF. The inverse occurred for N accumulation in the grain, confirming the greater capacity of the Sol da Manhã variety for N translocation to grain.
The N use efficiency (W_G/Ns) and absorption efficiency (N_T/Ns) were effective in discriminating the varieties only at the low N level, whereas the components of utilization (W_G/N_T) and translocation (N_G/N_T) were effective at both levels of N (Table 5). Therefore, the components W_G/Ns, N_T/Ns, W_G/N_T and N_G/N_T at the low N level, and W_G/N_T and N_G/N_T at the high N level, may be used as parameters for selection in genetic improvement programs aimed at efficient N use. These components can serve as auxiliary parameters in choosing superior genotypes in plant breeding programs and are easy to measure. Translocation stands out as one of the main processes underlying N use efficiency, which may mean that metabolic processes involved in synthesis are important. The incorporation of N into protein, initiated by the enzymes of NH4+ assimilation (glutamine synthetase and glutamate synthase), might therefore play a key role in N use efficiency.

Nitrate reductase activity did not differentiate the efficient and inefficient varieties (Table 6). In the greenhouse experiment, activity was affected by N level, but within each level the two varieties behaved similarly. Eichelberger et al. (1989a, 1989b) reached similar conclusions with the stiff-stalk population, where again selection for higher enzyme activity did not lead to greater yields. Purcino et al. (1994) evaluated nitrate reductase activity in ancient and modern maize genotypes grown under two levels of N, and found that the modern varieties presented better yields but that nitrate reductase activities measured at three different stages did not correlate with grain yield.

Table 5. Estimate of the components of absorption efficiency (N_T/Ns), use (W_G/Ns), utilization (W_G/N_T and W_G/N_G) and translocation (N_G/N_T) of nitrogen for the varieties Sol da Manhã NF and Catetão at two levels of nitrogen (10 and 100 kg/ha). Seropédica, RJ, Brazil, 1994. N_T: total plant N; Ns: N supplied (10 and 100 kg/ha); W_G: grain weight; N_G: total grain N. * and ** Significant at 5% and 1% by the F test.

Another key enzyme of N assimilation studied here is glutamine synthetase (GS). No significant differences were found between the two varieties for the activity of this enzyme at either level of N (Table 6). In the greenhouse experiment, the roots showed higher activities at the higher N level, while in the leaves the tendency was for higher activity at the low N level.

Conclusions

1. The differential translocation of N and photoassimilates to the grain is important for distinguishing between varieties efficient and non-efficient in N use.

2. The Sol da Manhã NF and Catetão varieties are efficient and non-efficient in N use, respectively.

3. The source/sink ratio differs between the Sol da Manhã NF and Catetão varieties; sink capacity is the most important parameter for identifying the variety with efficient N use.

4. Grain and total N content, cob weight and the ratio of ear dry weight to total dry weight can be used as auxiliary selection parameters for maize breeding programs.

5. The varieties Sol da Manhã NF and Catetão, in spite of presenting common germplasm, both with predominance of the Cateto race, perform differently in N use, indicating high variability within this race for this trait.
Table 3. Distribution of dry matter as percentages of lower shoot weight (W_LSh), mid shoot weight (W_MSh), upper shoot weight (W_USh), cob (W_Cob) and grain (W_G) in relation to total biomass (BM), for the varieties Sol da Manhã NF and Catetão at two levels of N (10 and 100 kg/ha). Calculated from the data of Table 5.

Table 4. Distribution of N as percent accumulation in the lower shoot (N_LSh), mid shoot (N_MSh), upper shoot (N_USh), cob (N_Cob), husk (N_H), tassel (N_Tas) and grain (N_G) in relation to the total plant N (N_T), for the varieties Sol da Manhã NF and Catetão at two levels of N (10 and 100 kg/ha). Calculated from the data presented in Tables 2 and 5.
The trochanteric double contour is a valuable landmark for assessing femoral offset underestimation on standard radiographs: a retrospective study

Background: Inaccurate projection on standard pelvic radiographs leads to the underestimation of femoral offset—a critical determinant of postoperative hip function—during total hip arthroplasty (THA) templating. We noted that the posteromedial facet of the greater trochanter and the piriformis fossa form a double contour on radiographs, which may be valuable in determining the risk of underestimating femoral offset. We evaluate whether projection errors can be predicted based on the double contour width.

Methods: Plain anteroposterior (AP) pelvic radiographs and magnetic resonance images (MRIs) of 64 adult hips were evaluated retrospectively. Apparent femoral offset, apparent femoral head diameter and double contour widths were evaluated from the radiographs. X-ray projection errors were estimated by comparison to the true neck length measured on MRIs after calibration to the femoral heads. Multivariate analysis with backward elimination was used to detect associations between the double contour width and radiographic projection errors. Femoral offset underestimation below 10% was considered acceptable for templating.

Results: The narrowest width of the double line between the femoral neck and piriformis fossa is significantly associated with projection error. When double line widths exceed 5 mm, the risk of a projection error greater than 10% is significantly increased compared to narrower double lines, and the acceptability rate for templating drops below 80% (p = 0.02).

Conclusion: The double contour width is a potential landmark for excluding pelvic AP radiographs unsuitable for THA templating due to inaccurate femoral rotation.

Background

The reconstruction of abductor lever arms is critical for hip function after total hip arthroplasty (THA). In clinical practice, femoral offset—defined as the distance between the center of the femoral head and the anatomical axis of the proximal femur—is used to simplify the biomechanical analysis of native or prosthetic hip joints and is an important determinant of postoperative hip function [1]. Postoperative changes in femoral offset may lead to altered muscular function that can result in hip abductor muscle insufficiency and pain, as well as early prosthesis wear [2,3]. Furthermore, loss of femoral offset necessitates increased abductor muscle force to maintain a normal gait pattern [4], and a loss exceeding 15% can result in gait alterations [5]. Assessment of femoral offset is routinely made on plain anteroposterior (AP) radiographs of the pelvis. However, this measurement is regularly underestimated because of incorrect projection. True anatomical hip measurements are made using computed tomography (CT), but this technique is limited in routine clinical work because of the associated radiation exposure and high costs. When compared to CT scan measurements, radiographic femoral offset is critically underestimated in as many as 28% of THA patients [6]. For templating THA on plain radiographs, it is pivotal to assess the projection quality resulting from possible femur malrotation. The lesser trochanter is commonly used as a guide for assessing femoral anteversion, but quantifying rotation with it is imprecise [7,8]. Another possible landmark to consider is the external obturator footprint: together with the posteromedial facet of the greater trochanter, they are visible as two contours on AP radiographs [9].
If this easily identifiable double contour is narrow and almost superimposed, the underestimation of femoral offset appears to be minimal. Therefore, our aim was to examine whether the double contour width is predictive of the projection error on plain pelvic AP radiographs. We hypothesized that femoral rotation relative to the x-ray plane may be estimated by measuring the distance between the two contours of the posteromedial trochanteric facet, and that this distance may be a good predictor of femoral projection.

Methods
For this observational study, we retrospectively examined radiographs and magnetic resonance images (MRIs) from patients who were aged 16 years and older and consulted our institution for any hip condition between January 2014 and December 2018. This analysis was approved by the Cantonal Ethics Committee of Zurich (KEK-ZH-Nr. 2015-0258) in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. We included patients who had pelvic AP radiographs and hip joint MRI with a femoral antetorsion view, acquired within 1 year of each other. Of 72 patients with complete image documentation, eight were excluded for the following reasons: open growth plates (n = 1), poor image quality (n = 1), severe hip dysplasia (n = 2), Morbus Perthes (n = 2) or prior hip surgery (n = 2). Our final cohort included 64 patients with a mean age of 30 years (Table 1). All patients provided written informed consent for the use of their clinical data for research and publication purposes.

Hip image evaluations
Plain pelvic radiographs were acquired at 81 kV/40 mAs according to our clinical protocol, with a slight internal rotation of the legs of 10-15° to compensate for the femoral antetorsion; the radiographic cassette was placed two fingerbreadths above the iliac crest. On these radiographs, the double contour consists of the lateral part of the intertrochanteric crest at the posteromedial facet of the greater trochanter and the projection of the convex cortex of the piriformis fossa (Fig. 1). We focused on measuring the double contour separation width and the apparent femoral neck length. The latter was chosen because, unlike femoral offset, femoral neck length is independent of the femoral axis, which is difficult to assess accurately owing to the limited field of view on pelvic radiographs, CT scans and MRIs. Multiple anatomical variants of the double contour can be observed. The most common variant appears as one S-shaped line and one less curved line; a less common variant appears as superimposed lines; and the rarest variant is seen as two curvy lines (Fig. 2a-d). To account for these variants, we defined the width of the double contour (including line thickness) at three levels perpendicular to the femoral axis: D1 at the level of the femoral neck, D3 as the maximum thickness of the double line and D2 as the minimum thickness between D1 and D3 (Fig. 2e-h). For each pelvic AP radiograph, the double contour widths (D1, D2, D3), the apparent femoral neck axis length (L_N*) and the apparent femoral head diameter (D_H*, for calibration) were measured manually with the measurement tools of the DICOM viewer (JiveX 5.0.2.2, VISUS Health IT GmbH, Bochum, Germany). For L_N*, we measured the full bone length spanning from the cortical surface of the femoral head to the opposing cortical surface along the femoral neck axis. D_H* was measured by fitting a circle to the articular surface (Fig. 3a).
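In the study, the three widths were read off manually in the DICOM viewer. Purely as an illustration of the definitions above, the following sketch (ours, with a hypothetical array of contour separations sampled from the femoral neck level towards the piriformis fossa) derives D1, D2 and D3:

```python
import numpy as np

def double_contour_widths(separation_mm):
    """D1, D2, D3 from separations sampled perpendicular to the femoral axis.

    `separation_mm` is a hypothetical 1-D array of double contour separations
    (including line thickness), ordered from the femoral neck level (index 0)
    towards the piriformis fossa (last index).
    """
    sep = np.asarray(separation_mm, dtype=float)
    d1 = sep[0]                      # width at the femoral neck level
    i3 = int(sep.argmax())
    d3 = sep[i3]                     # maximum thickness of the double line
    d2 = sep[: i3 + 1].min()         # minimum thickness between D1 and D3
    return d1, d2, d3

# Example: separations (mm) at five levels along the femoral axis
d1, d2, d3 = double_contour_widths([4.1, 3.2, 2.8, 3.5, 5.9])
print(f"D1={d1:.1f} mm, D2={d2:.1f} mm, D3={d3:.1f} mm")
```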
To evaluate the inter-rater reliability of these measures, twenty randomly selected images were evaluated independently by three physicians. Intra-rater reliability was determined based on one rater reading the same images twice, with 4 weeks between assessments. MRIs of the hip joint with a view of the distal femur for antetorsion measurement were acquired as described by Sutter et al. [1]. The true femoral neck axis length (L_N) and true femoral head diameter (D_H) were also measured on the MRIs of each hip in our cohort to calculate the underestimation of the femoral neck length projection on the corresponding plain radiographs. Using multiplanar reconstruction (3D MPR) in the DICOM viewer, the MRIs were manually aligned parallel to the plane through both the femoral neck axis and the axis through the proximal femoral diaphysis. D_H and L_N were then measured using the same landmarks as those observed on the plain radiographs (Fig. 3b). Femoral antetorsion was also quantified on MRIs as described by Botser et al. [10].

Fig. 1: The double contour highlighted within the boxed area (a) on the pelvic AP radiograph comprises the most common aspect of one S-shaped line (outlined by red dots) and a second line with minimal curvature (green dots) (b). On a high-resolution CT scan (c), the origins of the contours are identified as the trochanteric crest (green arrow) and the piriformis fossa (red arrow). In profile view, both lines are superimposed (d), and as the femur rotates, the two lines appear as separate elements (e).

Two main factors affect length measurements on a radiograph: (1) geometric magnification resulting from the distance between the object (patient) and film (camera) and (2) geometric distortion (projection error) resulting from the angle between the object and film relative to the x-ray beam [9]. We used the femoral head diameter to correct for magnification with a correction factor (D_H/D_H*): the apparent neck length corrected for magnification (L_N#) was calculated as

L_N# = L_N* × (D_H / D_H*).

The projection error, E_proj, was calculated from the relative difference between the apparent femoral neck length (corrected for magnification) and the true femoral neck length,

E_proj = (L_N# − L_N) / L_N.

Statistical analysis
All analyses were carried out using R [11]. Normality of the data was confirmed with Q-Q plots and Shapiro-Wilk tests. The data were modelled with generalised multivariate regression analyses using backward elimination to determine whether there was an independent association between the calculated projection errors and the anatomical parameters, with the significance level set to p < 0.05. Intraclass correlation coefficients (ICCs) were calculated using the irr package [12] to determine the intra- and inter-rater reliabilities of all image measurements. In addition, double contour widths were categorised into intervals of 2 mm, and an "acceptability rate" (AR) was defined as the fraction of radiographs with a projection error < 10% within a given interval. Distributions of ARs were compared with chi-square proportion tests. Study data were collected and managed using REDCap electronic data capture tools hosted at our clinic [13,14].

Results
The anatomical measurements of antetorsion, neck length, head diameter and double contour width were normally distributed in our patient cohort (Table 2); all measurements, except antetorsion, were significantly larger for males (p ≤ 0.0001). The mean calculated projection error was 2% higher for female patients (p = 0.024).
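The analyses in the study were carried out in R; purely as an illustration, the magnification correction, projection error and acceptability rate defined above can be sketched in Python with hypothetical measurements (all numbers below are invented, not data from the study):

```python
import numpy as np

def projection_error(ln_star, dh_star, ln_true, dh_true):
    """Magnification-corrected neck length and relative projection error."""
    ln_corr = ln_star * (dh_true / dh_star)   # L_N# = L_N* x (D_H / D_H*)
    return (ln_corr - ln_true) / ln_true      # E_proj as a fraction

# Hypothetical measurements (mm) for four radiograph/MRI pairs
ln_star = np.array([88.0, 90.5, 85.0, 92.0])   # apparent neck length
dh_star = np.array([52.0, 53.0, 51.0, 54.0])   # apparent head diameter
ln_true = np.array([95.0, 93.0, 96.0, 94.0])   # true neck length (MRI)
dh_true = np.array([48.0, 49.0, 47.5, 50.0])   # true head diameter (MRI)
d2      = np.array([2.1, 4.8, 7.3, 5.6])       # D2 widths (mm)

e_proj = projection_error(ln_star, dh_star, ln_true, dh_true)

# Acceptability rate per 2 mm D2 interval: fraction with |E_proj| < 10 %
bins = np.arange(0, 10, 2)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d2 >= lo) & (d2 < hi)
    if mask.any():
        ar = np.mean(np.abs(e_proj[mask]) < 0.10)
        print(f"D2 in [{lo},{hi}) mm: AR = {ar:.0%} (n={mask.sum()})")
```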
Intraclass correlation coefficients (ICCs) for inter-rater reliability were 0.99, 0.98 and 0.97 for D1, D2 and D3, respectively. For intra-rater reliability, the respective ICCs for D1, D2 and D3 were 0.99, 0.93 and 0.97. Inter-rater reliabilities for D_H and L_N were 0.95 and 0.99, respectively.

Discussion
From our retrospectively analysed imaging data, we wanted to examine whether the double contour created by the projection of the posteromedial facet of the greater trochanter is predictive of the projection error of femoral offset on plain pelvic AP radiographs. Although we found only weak correlations between various parameters related to the posteromedial facet and the femoral neck projection error, there was a significant trend showing that wider double contours are associated with higher femoral projection errors. Based on our regression modelling, the D2 double contour width (the narrowest width between the contours) correlated with the projection error of femoral offset. Accurate reconstruction of femoral offset is mandatory in THA. Most surgeons use plain pelvic AP radiographs for templating and postoperative quality control, which often leads to inconsistencies in the correct projection of offset due to inaccurate femoral positioning between the x-ray beam and film. Internal rotation is also often limited in the presence of osteoarthritis, which adds to the difficulty of achieving correct femoral positioning. A reliable predictor of the accuracy of femoral projection on plain pelvic radiographs, such as the D2 double contour width, would naturally aid the surgeon in daily clinical practice. When compared to CT scan measurements, radiographic femoral offset is underestimated by over 5 mm (or ~15% of neck length) in 28% of THA patients [6]. Given that a loss of femoral offset of 15% is associated with gait alterations [5], we considered a femoral projection error (leading to an underestimation of femoral offset) of less than 10% to be acceptable. Based on this definition, the proportion of acceptable radiographs dropped below 80% when D2 was wider than 5 mm. The D1 (width at the level of the femoral neck) and D3 (width at the level of the piriformis fossa) double contour widths were less reliable predictors of femoral malrotation; this might be explained by the variable shape of the double contour, which could affect D1 and D3 to a greater extent than D2. Being female and younger were both associated with a higher offset error. We speculate that for these patients, who are generally smaller, the measurement errors are proportionally larger. We also observed a small but significant difference in offset error between right and left hips. This fits with findings from Sutter et al., who reported higher antetorsion in the left legs of both healthy and femoroacetabular impingement patients [15]. This asymmetry could potentially favour projection errors on one side, since patient positioning protocols are not side-specific. Assessment of bilateral imaging in large cohorts would be needed to confirm or reject any of these conjectures. Interestingly, there was no statistical association between antetorsion and projection error, although in theory, under perfectly standardized patient positioning, this association should exist. This reflects how hard perfect positioning is to achieve in practice and further justifies the need for a simple projection quality control procedure. The inter- and intra-rater reliability of our double contour width measurements was very high.
All examiners used the same assessment protocol, which provided a clear guideline instructing them on how to evaluate the double contour. We believe the evaluation of the double contour is easy and reproducible, and we recommend its use in assessing the reliability of a radiograph before measuring femoral offset. There are several limitations to our study. The study design is retrospective in nature, and its power is limited to patients with a medical indication (most likely affecting the hip joint) who required both MRI and radiograph assessments. Most, if not all, of our cohort consisted of patients with hip problems, especially at a young age. We cannot consider our patients healthy candidates with regard to joint function, since many had an anatomical issue as the cause of their dysfunction; our study cohort is therefore not representative of the general population. There may be anatomical variants that do not show the associations observed in our cohort, but a much larger cohort would be needed to identify them. The regression parameters would differ with another patient cohort; nonetheless, we assume our results are still highly relevant for THA patients, since we selected patients who consulted for hip problems. Further evaluation of the double contour in a cohort without hip disorders as the indication for the acquisition of an MRI/CT and radiograph would be helpful to test our study hypothesis in a healthy population.

Conclusion
Our data indicate that if the minimal width of the trochanteric double contour between the level of the lateral femoral neck and the piriformis fossa exceeds 5 mm on radiographs from patients with a hip disorder, the risk of underestimating femoral offset by more than 10% is significantly greater than with narrower double contours. On this basis, the trochanteric double contour is a potential landmark for excluding pelvic AP radiographs unsuitable for THA templating. Using this landmark could greatly assist lower extremity surgeons in discerning which radiographs are suitable during routine preparation for THA.
Measured solid state and subcooled liquid vapour pressures of nitroaromatics using Knudsen effusion mass spectrometry

Abstract. Knudsen effusion mass spectrometry (KEMS) was used to measure the solid state saturation vapour pressure (P_sat,S) of a range of atmospherically relevant nitroaromatic compounds over the temperature range from 298 to 328 K. The selection of species analysed contained a range of geometric isomers and differing functionalities, allowing the impacts of these factors on saturation vapour pressure (P_sat) to be probed. Three subsets of nitroaromatics were investigated: nitrophenols, nitrobenzaldehydes and nitrobenzoic acids. The P_sat,S values were converted to subcooled liquid saturation vapour pressure (P_sat,L) values using experimental enthalpy of fusion and melting point values measured using differential scanning calorimetry (DSC). The P_sat,L values were compared to those estimated by predictive techniques and, with a few exceptions, were found to be up to 7 orders of magnitude lower. The large differences between the estimated P_sat,L and the experimental values can be attributed to the predictive techniques not containing parameters that adequately account for functional group positioning around an aromatic ring, or for the interactions between said groups. When comparing the experimental P_sat,S of the measured compounds, the ability to hydrogen bond (H bond) and the strength of the H bond formed appear to have the strongest influence on the magnitude of P_sat, with steric effects and molecular weight also being major factors. Comparisons were made between the KEMS system and data from diffusion-controlled evaporation rates of single particles in an electrodynamic balance (EDB). The KEMS and the EDB showed good agreement with each other for the compounds investigated.

Introduction
Organic aerosols (OAs) are an important component of the atmosphere with regard to resolving the impact aerosols have on both climate and air quality (Kroll and Seinfeld, 2008). Predicting how OA will behave requires knowledge of their physicochemical properties. OAs consist of primary organic aerosols (POAs) and secondary organic aerosols (SOAs). POAs are emitted directly into the atmosphere as solid or liquid particulates and make up about 20 % of OA mass globally (Ervens et al., 2011), but the exact percentage of POA varies significantly from region to region. SOAs are not emitted into the atmosphere directly as aerosols but instead form through atmospheric processes such as gas-phase photochemical reactions followed by gas-to-particle partitioning in the atmosphere (Pöschl, 2005). A key property for predicting the partitioning of compounds between the gaseous and aerosol phases is the pure component equilibrium vapour pressure, also known as the saturation vapour pressure (P_sat) (Bilde et al., 2015).
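The link between P_sat and partitioning can be made concrete through the effective saturation concentration C* commonly used in partitioning frameworks. As a minimal sketch (our illustration, not part of this study), assuming ideality, C* in µg m^−3 is 10^6 M P_sat / (R T) with M in g mol^−1 and P_sat in Pa:

```python
# Sketch: saturation vapour pressure -> effective saturation concentration C*,
# assuming an ideal, pure component.
R = 8.314  # J mol^-1 K^-1

def c_star(molar_mass_g_mol: float, p_sat_pa: float, temp_k: float = 298.0) -> float:
    """Effective saturation concentration in ug m^-3."""
    return 1e6 * molar_mass_g_mol * p_sat_pa / (R * temp_k)

# Example: 2-nitrophenol (139.11 g mol^-1) at the P_sat,S reported later in
# this work (8.94 x 10^-4 Pa at 298 K)
print(f"C* = {c_star(139.11, 8.94e-4):.1f} ug m^-3")
```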
It has been estimated that the number of organic compounds in the atmosphere is in excess of 100 000 (Hallquist et al., 2009); it is therefore not feasible to measure the P_sat of each experimentally. Instead, P_sat values are often estimated using group contribution methods (GCMs) that are designed to capture the functional dependencies when predicting absolute values. GCMs start with a base molecule with known properties, typically the carbon skeleton. A functional group is then added to the base molecule. This addition changes the P_sat, and the difference between the base molecule and the functionalised molecule is the contribution from that particular functional group. If this concept holds, the contribution from the functional group should not be affected by the base molecule to which it is added (Bilde et al., 2015). Whilst this is true in many cases, there are numerous exceptions. These exceptions normally arise when proximity effects occur, such as neighbouring group interactions or other mesomeric effects. In this work the focus is on the Nannoolal et al. method (Nannoolal et al., 2008), the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) and SIMPOL (Pankow and Asher, 2008). Detailed assessments of such methods have been made, for example by O'Meara et al. (2014), often showing that predicted values differ significantly from experimental data. The limitations and uncertainties of GCMs come from a range of factors, including underrepresentation of long-chain hydrocarbons (> C18); underrepresentation of certain functional groups, such as nitro or nitrate groups; a lack of data on the impact of intramolecular bonding; and the temperature dependence arising from the need to extrapolate over large temperature ranges to reach ambient conditions (Bilde et al., 2015). This has important implications for partitioning modelling, in a mechanistic sense, such as an over- or underestimation of the fraction partitioning to the particulate state. Different GCMs have different levels of reliability for different classes of compounds and perform much more reliably if the compound of interest resembles those used in the parameterisation data set of the GCM (Kurtén et al., 2016). For example, in the assessment by O'Meara et al. (2014), for the compounds to which it is applicable, EVAPORATION (Estimation of VApour Pressure of ORganics, Accounting for Temperature, Intramolecular, and Non-additivity effects; Compernolle et al., 2011) was found to give the minimum mean absolute error, the highest accuracy for SOA loading estimates and the highest accuracy for SOA composition. Despite this, EVAPORATION should not be used for aromatic compounds, as there are no aromatic compounds in its parameterisation data set (Compernolle et al., 2011). Methods developed with OA in mind, such as EVAPORATION (Compernolle et al., 2011), are not without their limitations owing to the lack of experimental data available for highly functionalised, low-volatility organic compounds. As the degree of functionality increases, so does the difficulty in predicting the P_sat, as more intramolecular forces, steric effects and shielding effects must be considered. The majority of GCMs designed for estimating the P_sat of organic compounds were developed for the chemical industry with a focus on monofunctional compounds with P_sat on the order of 10^3-10^5 Pa (Bilde et al., 2015). SOAs, in contrast, are typically multifunctional compounds with P_sat often many orders of magnitude below 10^−1 Pa.
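To illustrate the additivity assumption, and why a purely additive scheme cannot distinguish positional isomers, consider the following toy model in a SIMPOL-like form, log10(P_sat,L) = b0 + Σ_k ν_k b_k (the coefficients are invented placeholders, not fitted values from any published GCM):

```python
# Toy group contribution model (ours). The coefficients are hypothetical and
# exist only to show how additivity ignores functional group positioning.
HYPOTHETICAL_B = {
    "zeroeth":        1.8,   # b0, base term for the carbon skeleton
    "aromatic_ring": -0.4,
    "nitro":         -2.1,
    "phenol_OH":     -2.4,
}

def log10_p_sat_l(groups: dict) -> float:
    """Additive log10 vapour pressure from group counts (toy model)."""
    return HYPOTHETICAL_B["zeroeth"] + sum(
        n * HYPOTHETICAL_B[g] for g, n in groups.items()
    )

# All mononitrophenol isomers receive the same prediction, because a purely
# additive scheme has no term for ortho/meta/para positioning:
nitrophenol = {"aromatic_ring": 1, "nitro": 1, "phenol_OH": 1}
print(f"log10(P_sat,L) = {log10_p_sat_l(nitrophenol):.1f} (any isomer)")
```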
GCM development, with a focus on the P_sat of SOA, has to deal with a lack of robust experimental data and, historically, large differences in measurement data depending on the technique and instrument used to acquire the data. To address this problem, Krieger et al. (2018) identified a reference data set for validating P_sat measurements using the polyethylene glycol (PEG) series. To improve the performance of GCMs when applied to highly functionalised compounds, more data are required that probe both the effect of relative functional group positioning and the effects of interaction between functional groups on P_sat, such as in the work by Booth et al. (2012) and Dang et al. (2019). In this study the solid state saturation vapour pressure (P_sat,S) and subcooled liquid saturation vapour pressure (P_sat,L) of three families of nitroaromatic compounds are determined using Knudsen effusion mass spectrometry (KEMS), building on the work done by Dang et al. (2019) and Bannan et al. (2017). These include substituted nitrophenols, substituted nitrobenzoic acids and nitrobenzaldehydes. Nitroaromatics are useful tracers for anthropogenic emissions (Grosjean, 1992), and many nitroaromatic compounds are noted to be highly toxic (Kovacic and Somanathan, 2014). Studies quantifying the overall role of nitrogen-containing organics in aerosol formation would also benefit from more refined P_sat values (Duporté et al., 2016; Smith et al., 2008). Even if mechanistic models perform poorly in predicting aerosol mass due to missing process phenomena, resolving the partitioning is still important. Several studies have reported the observation of methyl nitrophenols (Chow et al., 2016; Kitanovski et al., 2012; Schummer et al., 2009) and nitrobenzoic acids (van Pinxteren and Herrmann, 2007). Nitrobenzaldehydes can form from the photo-oxidation of toluene in a high-NO_x environment (Bouya et al., 2017). Both nitrophenols and nitrobenzoic acids were identified in the review paper by Bilde et al. (2015) as compounds of interest and recommended for further study. Aldehyde groups tend to have little impact on P_sat by themselves, but the =O of the aldehyde group can act as a hydrogen bond acceptor. There is a general lack of literature vapour pressure data for nitroaromatic compounds, and despite recent work on nitrophenols by Bannan et al. (2017), there is still a lack of data on such compounds in the literature. This is reflected, in part, in the effectiveness of the GCMs in predicting the P_sat of such compounds. Here we present P_sat,S and P_sat,L data for 20 nitroaromatic compounds. The P_sat,S data were collected using KEMS, with a subcooled correction performed with thermodynamic data from a differential scanning calorimeter (DSC). The trends in the P_sat,S data are considered, and chemical explanations are given for the observed differences. As identified by Bilde et al. (2015), experimental P_sat can differ by several orders of magnitude among techniques. One way of mitigating this is to collect data for a compound using multiple techniques, whilst running reference compounds to assess consistency among the employed methods. We therefore use supporting data from the electrodynamic balance (EDB) at ETH Zurich for three of the nitroaromatic compounds. The P_sat,L data are then compared with the P_sat,L predicted by the GCMs, highlighting where they perform well and where they perform poorly.
Finally, these measurements using the new PEG reference standards are compared to past KEMS measurements using an old reference standard, owing to differences in experimental P_sat between this work and previous KEMS work.

Experimental

Compound selection
A total of 10 nitrophenol compounds were selected for this study, including 9 monosubstituted; 4 nitrobenzaldehydes, including 1 monosubstituted; and 6 nitrobenzoic acids, including 5 monosubstituted. The nitrophenols are shown in Table 1, the nitrobenzaldehydes in Table 2 and the nitrobenzoic acids in Table 3. All compounds selected for this study were purchased at a purity of 99 % and were used without further preparation. All compounds are solid at room temperature.

Knudsen effusion mass spectrometry system (KEMS)
The KEMS system is the same system that has been used in previous studies (Booth et al., 2009), and a summary of the measurement procedure is given here. For a more detailed overview see Booth et al. (2009). To calibrate the KEMS, a reference compound of known P_sat is used. In this study the polyethylene glycol (PEG) series, PEG-3 (P_298 = 6.68 × 10^−2 Pa) and PEG-4 (P_298 = 1.69 × 10^−2 Pa) (Krieger et al., 2018), was used. The KEMS was shown to accurately measure the P_sat of PEG-4 in the study by Krieger et al. (2018), but PEG-3 was not measured with the KEMS in that study. In this study, when using PEG-4 as the reference compound for PEG-3, the measured P_sat of PEG-3 had an error of 30 % compared to the experimental values from Krieger et al. (2018), which is well within the quoted 40 % error margin of the KEMS (Booth et al., 2009). When using PEG-3 as the reference compound for PEG-4, the measured P_sat of PEG-4 had an error of 20 %. The reference compound is placed in a temperature-controlled Knudsen cell. The cell has a chamfered orifice through which the sample effuses, creating a molecular beam. The size of the orifice is ≤ 1/10 of the mean free path of the gas molecules in the cell. This ensures that the particles effusing through the orifice do not disturb the thermodynamic equilibrium of the cell. The molecular beam is then ionised using standard 70 eV electron impact ionisation and analysed using a quadrupole mass spectrometer. After correcting for the ionisation cross section (Booth et al., 2009), the signal generated is proportional to the P_sat. Once the calibration process is completed, it is possible to measure a sample of unknown P_sat. When the sample is changed, it is necessary to isolate the sample chamber from the measurement chamber using a gate valve so that the sample chamber can be vented whilst the ioniser filament and the secondary electron multiplier (SEM) detector remain on, allowing direct comparisons with the reference compound. The P_sat of the sample can be determined from the intensity of the mass spectrum if the ionisation cross section at 70 eV and the temperature at which the mass spectrum was taken are known. The samples of unknown P_sat are typically solid, so it is the P_sat,S that is determined. After the P_sat,S (Pa) has been determined for multiple temperatures, the Clausius-Clapeyron equation (Eq. 1) can be used to determine the enthalpy and entropy of sublimation, as shown in Booth et al. (2009):

ln(P_sat) = −ΔH_sub / (R T) + ΔS_sub / R, (1)

where T is the temperature (K), R is the ideal gas constant (J mol^−1 K^−1), ΔH_sub is the enthalpy of sublimation (J mol^−1) and ΔS_sub is the entropy of sublimation (J mol^−1 K^−1).
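As a concrete illustration of this fitting step (our sketch, with hypothetical (T, P) data rather than values from the tables of this work), the enthalpy and entropy of sublimation follow from a straight-line fit of ln(P_sat) against 1/T:

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical example data over the 298-328 K range used in this work
T = np.array([298.0, 303.0, 308.0, 313.0, 318.0, 323.0, 328.0])        # K
P = np.array([8.9e-4, 1.6e-3, 2.9e-3, 5.1e-3, 8.7e-3, 1.5e-2, 2.4e-2])  # Pa

# Eq. (1): ln(P_sat) = -dH_sub/(R*T) + dS_sub/R, linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
dH_sub = -slope * R        # J mol^-1
dS_sub = intercept * R     # J mol^-1 K^-1
print(f"dH_sub = {dH_sub/1000:.1f} kJ mol^-1, dS_sub = {dS_sub:.1f} J mol^-1 K^-1")
```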
P_sat was obtained over a range of 30 K in this work, starting at 298 K and rising to 328 K. The reported solid state vapour pressures are calculated from a linear fit of ln(P_sat) vs. 1/T using the Clausius-Clapeyron equation.

Differential scanning calorimetry (DSC)
The reference state used in atmospheric models, and the one predicted by GCMs, is the subcooled liquid, so P_sat,L is required. It is therefore necessary to convert the P_sat,S determined by the KEMS system into P_sat,L. As with previous KEMS studies (Bannan et al., 2017; Booth et al., 2017), the melting point (T_m) and the enthalpy of fusion (ΔH_fus) are required for the conversion. These values were measured with a TA Instruments DSC 2500 differential scanning calorimeter (DSC). Within the DSC, heat flow and temperature were calibrated using an indium reference, and heat capacity using a sapphire reference. A heating rate of 10 K min^−1 was used. A sample of 5-10 mg was weighed using a microbalance and then pressed into a hermetically sealed aluminium DSC pan. A purge gas of N2 was used with a flow rate of 30 mL min^−1. Data processing was performed using the Trios software supplied with the instrument. Δc_p,sl was estimated using Δc_p,sl = ΔS_fus (Grant et al., 1984; Mauger et al., 1972).

Electrodynamic balance (EDB)
The recently published paper by Dang et al. (2019) measured the P_sat of several of the same compounds that are studied in this paper using the same KEMS system; however, in this study the newly defined best-practice reference samples were used (Krieger et al., 2018), whereas Dang et al. (2019) used malonic acid. The difference in reference compound led to a discrepancy in the experimental P_sat. Supporting measurements for the compounds were therefore performed using the EDB at ETH Zurich in order to rule out instrumental problems with the KEMS. The EDB at ETH Zurich has been used to investigate the P_sat of low-volatility compounds in the past (Huisman et al., 2013; Zardini et al., 2006; Zardini and Krieger, 2009), and a brief overview is given here; for full details see Zardini et al. (2006) and Zardini and Krieger (2009). The EDB can be applied to both liquid particles and non-spherical solid particles (Bilde et al., 2015). The EDB uses a double ring configuration (Davis et al., 1990) to levitate a charged particle in a cell with a gas flow free from the evaporating species under investigation. There is precise control of both temperature and relative humidity within the cell. Diffusion-controlled evaporation rates of the levitated particle are measured at a fixed temperature and relative humidity by precision sizing using optical resonance spectroscopy in backscattering geometry, with a broadband LED source and Mie theory for the analysis (Krieger et al., 2018). P_sat is calculated at multiple temperatures, and the Clausius-Clapeyron equation can then be used to calculate P_sat at a given temperature (Eq. 1). As single particles injected from a dilute solution may either stay in a supersaturated liquid state or crystallise, it is important to identify their physical state. For 4-methyl-3-nitrophenol, a 3 % solution dissolved in isopropanol was injected into the EDB. After the injection and fast evaporation of the isopropanol, all particles were non-spherical but with only small deviations from a sphere, meaning that it was unclear whether the phase was amorphous or crystalline. To determine the phase of this first experiment, a second experiment was performed, in which a solid particle was injected directly into the EDB.
Mass loss with time was measured by following the DC voltage necessary to compensate for the gravitational force acting on the particle and keep it levitating. When comparing the P_sat from both of these experiments, it is clear that the initial measurement of 4-methyl-3-nitrophenol was in the crystalline phase. 3-Methyl-4-nitrophenol was only injected as a solution, but the particle crystallised and was clearly in the solid state. 4-Methyl-2-nitrophenol was injected as both a 3 % and a 10 % solution. Despite being able to trap a particle, the particle would completely evaporate within about 30 s. This evaporation timescale is too short to allow the EDB to collect any quantitative data. Using the equation for large particles neglecting evaporative cooling (Hinds, 1999),

P_sat = (ρ d_p^2 R T) / (8 D M t), (2)

where t is the time that the particle was trapped within the cell of the EDB, R is the ideal gas constant, ρ is the density of the particle, d_p is the diameter of the particle, D is the diffusion coefficient, M is the molecular mass, and T is the temperature, Eq. (2) gives approximately 4.3 × 10^−3 Pa for P_sat,L at 290 K.

Subcooled correction
The conversion between P_sat,S and P_sat,L is done using the Prausnitz equation (Prausnitz et al., 1998),

ln(P_sat,L / P_sat,S) = (ΔH_fus / (R T_m)) (T_m/T − 1) − (Δc_p,sl / R) (T_m/T − 1) + (Δc_p,sl / R) ln(T_m/T), (3)

where P_sat,L/P_sat,S is the ratio between P_sat,L and P_sat,S, ΔH_fus is the enthalpy of fusion (J mol^−1), Δc_p,sl is the change in heat capacity between the solid and liquid states (J mol^−1 K^−1), T is the temperature (K), and T_m is the melting point (K).

The most common P_sat prediction techniques are GCMs. Several different GCMs have been developed (Moller et al., 2008; Myrdal and Yalkowsky, 1997; Nannoolal et al., 2008; Pankow and Asher, 2008), some being more general and others, such as the EVAPORATION method (Compernolle et al., 2011), having been developed with OA as the target compounds. The Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997), the Nannoolal et al. method (Nannoolal et al., 2008) and the Moller et al. method (Moller et al., 2008) are combined methods requiring a boiling point, T_b, as an input. If the T_b of a compound is known experimentally, this is an advantage, but most atmospherically relevant compounds have an unknown T_b, so the T_b used as input is itself calculated using a GCM. The combined methods use a T_b calculated with a GCM for many of the same reasons that GCMs are used to calculate P_sat, i.e. the difficulty in acquiring experimental data for highly reactive compounds or compounds with short lifetimes. The Nannoolal et al. method (Nannoolal et al., 2004), the Stein and Brown method (Stein and Brown, 1994) and the Joback and Reid method (Joback et al., 1987) are most commonly used. The Joback and Reid method is not considered in this paper due to its known biases, the Stein and Brown method being an improved version of Joback and Reid. The T_b used in the combined methods is, however, another source of potential error, and for methods that extrapolate P_sat from T_b, the size of this error increases with increasing difference between T_b and the temperature to which it is being extrapolated (O'Meara et al., 2014). EVAPORATION (Compernolle et al., 2011) and SIMPOL (Pankow and Asher, 2008) do not require a boiling point, only a structure and a temperature of interest. The main limitation of many GCMs, aside from the data required to create and refine them, is that they do not account for intramolecular interactions, such as hydrogen bonding, or for steric effects.
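Returning to the subcooled correction, a minimal sketch (ours) of the Eq. (3) conversion follows, with Δc_p,sl approximated by ΔS_fus = ΔH_fus/T_m as in the text; the inputs are hypothetical DSC values, not measurements from this work. Note that under this approximation the first two terms of Eq. (3) cancel exactly, leaving ln(P_sat,L/P_sat,S) = (ΔH_fus/(R T_m)) ln(T_m/T):

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def subcooled(p_sat_s: float, dh_fus: float, t_m: float, t: float) -> float:
    """P_sat,L from P_sat,S via the Prausnitz equation, Eq. (3)."""
    dcp = dh_fus / t_m                       # dc_p,sl ~ dS_fus = dH_fus / T_m
    x = t_m / t - 1.0
    ln_ratio = (dh_fus / (R * t_m)) * x - (dcp / R) * x + (dcp / R) * np.log(t_m / t)
    return p_sat_s * np.exp(ln_ratio)

# Hypothetical example: P_sat,S = 1.0e-3 Pa, dH_fus = 20 kJ mol^-1,
# T_m = 380 K, evaluated at T = 298 K
print(f"P_sat,L ~ {subcooled(1.0e-3, 2.0e4, 380.0, 298.0):.2e} Pa")
```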
The Nannoolal et al. method (Nannoolal et al., 2008), the Moller et al. method (Moller et al., 2008) and EVAPORATION (Compernolle et al., 2011) attempt to address this by including secondary interaction terms. In the Nannoolal et al. method (Nannoolal et al., 2008), there are terms to account for ortho, meta and para isomerism of aromatic compounds; however, there are no terms for dealing with tri- or higher-substituted aromatics, and in these instances all isomers give the same prediction. A common misuse of GCMs occurs when a GCM is applied to a compound containing functionality not included in the training set, e.g. using EVAPORATION (Compernolle et al., 2011) with aromatic compounds or using SIMPOL (Pankow and Asher, 2008) with compounds containing halogens. As the GCM does not have the tools to deal with this functionality, it will either misattribute a contribution (in the EVAPORATION example, the aromatic structure would be treated as a cyclic aliphatic structure) or simply ignore the functionality, as is the case when SIMPOL (Pankow and Asher, 2008) is used for halogen-containing compounds. When selecting a GCM to model P_sat, it is essential to investigate whether the method is applicable to the compounds of interest. Of the popular P_sat GCMs, the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) contains only three nitroaromatic compounds, the Nannoolal et al. method (Nannoolal et al., 2008) contains 13, the Moller et al. (2008) method contains no more than 14, SIMPOL (Pankow and Asher, 2008) contains 25 and EVAPORATION (Compernolle et al., 2011) contains zero. The specific nitroaromatics used by the Nannoolal et al. method and the Moller et al. method are not stated (to the author's knowledge), as the data were taken directly from the Dortmund Data Bank. Despite the SIMPOL (Pankow and Asher, 2008) method containing 25 nitroaromatic compounds, 11 of these are taken from a gas chromatography method using a single data point from a single data set (Schwarzenbach et al., 1988).

Inductive and resonance effects
All functional groups around an aromatic ring either withdraw or donate electron density. This is a result of two major effects, the inductive effect and the resonance effect, or a combination of the two (Ouellette et al., 2015a). The inductive effect is the unequal sharing of the bonding electrons through a chain of atoms within a molecule. A methyl group donates electron density relative to a hydrogen atom and is therefore considered an electron-donating group, whereas a chloro group withdraws electron density and is therefore considered an electron-withdrawing group. The resonance effect occurs when a compound can have multiple resonance forms. In a nitro group, as the oxygen atoms are more electronegative than the nitrogen atom, a pair of electrons from the nitrogen-oxygen double bond can be moved onto the oxygen atom, followed by a pair of electrons being moved out of the ring to form a carbon-nitrogen double bond, leaving the ring with a positive charge. This leads to the nitro group acting as an electron-withdrawing group. In an amino group, on the other hand, the hydrogens are not more electronegative than the nitrogen; instead the lone pair on the nitrogen can be donated into the ring, causing the ring to carry a negative charge and the amino group to act as an electron-donating group. Examples of the inductive effect and the resonance effect are given in Fig. 1 (Ouellette et al., 2015a).
Some functional groups, such as an aromatic OH group, can both donate and withdraw electron density at the same time. In phenol the OH group withdraws electron density via the inductive effect, but it also donates electron density via the resonance effect. This is shown in Fig. 2. As the resonance effect is typically much stronger than the inductive effect, OH has a net donation of electron density in phenol (see Fig. 2). The positioning of the functional groups around the aromatic ring determines the extent to which the inductive and resonance effects occur. The changes in electron density due to the inductive effect and the resonance effect also change the partial charges on the atoms within the aromatic ring. These changes impact the strength of any potential H bonds that may form.

4 Results and discussion

4.1 Solid state vapour pressure
P_sat,S values measured directly by the KEMS are given in Tables 4, 5 and 6 for the nitrophenols, nitrobenzaldehydes and nitrobenzoic acids respectively. Measurements were made at increments of 5 K from 298 to 328 K, with the exception of the following compounds, which melted during the temperature ramp: 2-nitrophenol was measured between 298 and 318 K, 3-methyl-4-nitrophenol between 298 and 313 K, 4-methyl-2-nitrophenol between 298 and 303 K, 5-fluoro-2-nitrophenol between 298 and 308 K, and 2-nitrobenzaldehyde between 298 and 313 K. The Clausius-Clapeyron equation (Eq. 1) was used to calculate the enthalpies and entropies of sublimation. The melting points of the compounds studied are given in Table 7. Generally speaking, considering the different groups of compounds as a whole, the nitrobenzaldehydes studied exhibit higher P_sat,S (by an order of magnitude) than the nitrophenols and nitrobenzoic acids studied. This is most likely because none of the nitrobenzaldehydes studied herein are capable of undergoing hydrogen bonding (H bonding), whilst all of the nitrophenols and nitrobenzoic acids, to varying extents, are capable of hydrogen bonding. The nitrophenols and nitrobenzoic acids studied exhibit a range of overlapping P_sat,S, so nothing can be inferred when considering these two types of compounds together as groups; therefore the differences within each of the groups must be considered. Considering first the nitrophenols (Table 4), the highest-P_sat,S compound is 2-fluoro-4-nitrophenol (2.75 × 10^−2 Pa). There are two potential H-bonding explanations for why this compound has such a high P_sat,S relative to the other nitrophenols and fluoro nitrophenols. First, in this isomer the presence of the F atom on the C adjacent to the OH group gives rise to intramolecular H bonding (Fig. 3a), which reduces the extent of intermolecular interaction possible and increases P_sat,S. This effect can clearly be seen from the fact that in 3-fluoro-4-nitrophenol, where the F atom is positioned further away from the OH group, the P_sat,S is significantly lower (4.55 × 10^−3 Pa) because intermolecular H bonding can occur (Fig. 3b). However, Shugrue et al. (2016) state that neutral organic fluoro and nitro groups form very weak hydrogen bonds, which, whilst they do exist, can be difficult to detect by many conventional methods. The second explanation depends on the inductive effect mentioned previously.
By using MOPAC2016 (Stewart, 2016), a semi-empirical quantum chemistry program based on the neglect of diatomic differential overlap (NDDO) approximation (Dewar and Thiel, 1977), the partial charge of the phenolic carbon can be calculated. The partial charge of the phenolic carbon can depend on the orientation of the OH if the molecule does not have a plane of symmetry, so in this work the partial charge used is an average over the two extreme orientations of the OH, as shown in Fig. 4. A plot of P_sat,S vs. the partial charge of the phenolic carbon for the nitrophenols can be found in Fig. 5. The partial charge of the phenolic carbon in 2-fluoro-4-nitrophenol is 0.275 with a P_sat,S of 2.75 × 10^−2 Pa, whereas for 3-fluoro-4-nitrophenol it is 0.379 with a P_sat,S of 4.55 × 10^−3 Pa. The more positive the partial charge of the phenolic carbon, the better it is able to stabilise the increased negative charge that develops on the O atom as a result of H-bond formation. As a result, stronger intermolecular H bonds are formed, giving rise to a lower P_sat,S. Moving the nitro group from being para to the OH in 3-fluoro-4-nitrophenol to meta to the OH in 5-fluoro-2-nitrophenol further reduces the P_sat,S to 4.25 × 10^−3 Pa. This reduction in P_sat,S can also be explained via the combination of the inductive effect and the resonance effect, as the partial charge of the phenolic carbon rises from 0.379 to 0.396, again implying stronger intermolecular H bonds and, therefore, a lower P_sat,S. For the fluoro nitrophenols, as shown in Fig. 5, as the partial charge of the phenolic carbon increases, the P_sat,S decreases. A similar trend occurs in the methyl nitrophenols as in the fluoro nitrophenols, with a larger partial charge of the phenolic carbon corresponding to a lower P_sat,S, as shown in Fig. 5. 3-Methyl-2-nitrophenol is an exception to this and is discussed shortly. 3-Methyl-4-nitrophenol has the most positive partial charge of 0.362 and the lowest P_sat,S of 1.78 × 10^−3 Pa, 4-methyl-2-nitrophenol has the next most positive partial charge of 0.343 and the next lowest P_sat,S of 3.11 × 10^−3 Pa, and 4-methyl-3-nitrophenol has the least positive partial charge of 0.249 and the highest P_sat,S of 1.08 × 10^−2 Pa. 3-Methyl-2-nitrophenol does not follow this trend, however, having a partial charge of 0.378 and a P_sat,S of 9.90 × 10^−3 Pa. As shown in Fig. 5, 3-methyl-2-nitrophenol would be expected to have a much lower P_sat,S than is observed, given the high partial charge on the phenolic carbon. A possible explanation for why 3-methyl-2-nitrophenol does not follow this trend is the positioning of its functional groups. As shown in Fig. 6a, all of the functional groups are clustered together, and their proximity sterically hinders the formation of H bonds, thus increasing the P_sat,S. Conversely, as shown in Fig. 6b, the fact that the methyl group is further away in 4-methyl-2-nitrophenol leads to less steric hindrance of H-bond formation. Whilst 3-methyl-2-nitrophenol has a higher P_sat,S than expected given the partial charge on the phenolic carbon, 4-amino-2-nitrophenol has a much lower P_sat,S (Fig. 5). This is likely due to 4-amino-2-nitrophenol being capable of forming more than one hydrogen bond, whereas all the other compounds investigated were only capable of forming one H bond.
However, despite 4-amino-2-nitrophenol being capable of forming more than one H bond, replacing the methyl group on 4-methyl-2-nitrophenol with an amino group to form 4-amino-2-nitrophenol surprisingly increases the P_sat,S from 3.11 × 10^−3 to 3.36 × 10^−3 Pa. The higher P_sat,S can be explained via the combination of the inductive effect and the resonance effect. Whilst the partial charge of the phenolic carbon in 4-methyl-2-nitrophenol is 0.343, the partial charge of the phenolic carbon in 4-amino-2-nitrophenol is only 0.264, and the partial charge of the carbon bonded to the amine group is only 0.211. So whilst 4-amino-2-nitrophenol is capable of forming two intermolecular H bonds compared to 4-methyl-2-nitrophenol's one, they will be much weaker. 4-Amino-2-nitrophenol is a good example of a compound with multiple competing factors affecting P_sat,S, leading to a higher P_sat,S than would be expected from one factor and a lower P_sat,S than expected from another. Similar to 4-amino-2-nitrophenol, 4-chloro-3-nitrophenol also has a lower P_sat,S than expected according to the partial charge of the phenolic carbon. This can be seen in Fig. 5. Unlike for 4-amino-2-nitrophenol, the explanation for 4-chloro-3-nitrophenol is simpler. Replacing the methyl group on 4-methyl-3-nitrophenol with a chloro group to form 4-chloro-3-nitrophenol reduces the P_sat,S from 1.08 × 10^−2 to 2.26 × 10^−3 Pa. This reduction in P_sat,S can be explained by the increase in the partial charge of the phenolic carbon from 0.249 to 0.266, as well as a 13 % increase in molecular weight. Replacing the F atom in 3-fluoro-4-nitrophenol with a methyl group to form 3-methyl-4-nitrophenol further reduces the P_sat,S (1.78 × 10^−3 Pa), although exactly why is unclear. The methyl group cannot engage in intermolecular H bonding; it will sterically hinder any H bonding that the NO2 group undergoes; and it reduces the partial charge of the phenolic carbon of the molecule (from 0.379 to 0.362) (Stewart, 2016), which would reduce the strength of H-bonding interactions between the molecules. It is possible that the crystallographic packing density of 3-methyl-4-nitrophenol is higher, although no data are available to support this; when looking at the P_sat,L data (Sect. 4.2), 3-methyl-4-nitrophenol exhibits a higher P_sat,L than 3-fluoro-4-nitrophenol, which is what would be expected given the respective partial charges of the phenolic carbons. Removing the methyl group from 4-methyl-2-nitrophenol to give 2-nitrophenol causes the P_sat,S to drop from 3.11 × 10^−3 to 8.94 × 10^−4 Pa. This reduction in P_sat,S matches an increase in the positive partial charge of the phenolic carbon, from 0.343 to 0.383, implying an increase in the strength of the intermolecular H bonds and therefore a reduction in P_sat,S.
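The partial-charge trend of Fig. 5 can be quantified directly from the values quoted above. As a small sketch (ours), using the (partial charge, P_sat,S) pairs given in the text and excluding the two noted outliers:

```python
import numpy as np

# (phenolic-carbon partial charge, P_sat,S / Pa) pairs quoted in the text;
# 3-methyl-2-nitrophenol and 4-amino-2-nitrophenol are excluded, as the text
# attributes their behaviour to steric and multiple-H-bond effects.
data = {
    "2-fluoro-4-nitrophenol": (0.275, 2.75e-2),
    "3-fluoro-4-nitrophenol": (0.379, 4.55e-3),
    "5-fluoro-2-nitrophenol": (0.396, 4.25e-3),
    "4-methyl-3-nitrophenol": (0.249, 1.08e-2),
    "4-methyl-2-nitrophenol": (0.343, 3.11e-3),
    "3-methyl-4-nitrophenol": (0.362, 1.78e-3),
    "2-nitrophenol":          (0.383, 8.94e-4),
}
q = np.array([v[0] for v in data.values()])
logp = np.log10([v[1] for v in data.values()])

r = np.corrcoef(q, logp)[0, 1]
slope, intercept = np.polyfit(q, logp, 1)
print(f"Pearson r = {r:.2f}; log10(P_sat,S) ~ {slope:.1f}*q + {intercept:.1f}")
```

A strongly negative correlation is consistent with the picture that a more positive phenolic carbon stabilises stronger intermolecular H bonds and thus lowers P_sat,S.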
Now considering the nitrobenzaldehydes (Table 5), the highest-P_sat,S compound is 2-nitrobenzaldehyde (3.32 × 10^−1 Pa). Comparing this to 2-nitrophenol (8.94 × 10^−4 Pa) shows how significant the ability to form H bonds is to the P_sat,S of a compound, with the replacement of a hydroxyl group (capable of H bonding) with an aldehyde group (incapable of H bonding) raising the P_sat,S of the compound by more than 2 orders of magnitude. The decrease in P_sat,S observed on moving the nitro group from being ortho to the aldehyde group in 2-nitrobenzaldehyde to being meta in 3-nitrobenzaldehyde (1.21 × 10^−1 Pa) and para in 4-nitrobenzaldehyde (3.40 × 10^−2 Pa) can be explained by the different crystallographic packing densities of the three isomers, as shown in Fig. 7. Crystallographic packing density is a measure of how densely packed the molecules of a given compound are when they crystallise: the more closely packed the molecules are, the greater the overall extent of interaction between them and the lower the P_sat,S. The order of the P_sat,S observed here for the three isomers of nitrobenzaldehyde matches that of their crystallographic packing densities (Coppens and Schmidt, 1964; Engwerda et al., 2018; King and Bryant, 1996), with the lowest P_sat,S correlating with the highest packing density and vice versa. The addition of a Cl atom to 3-nitrobenzaldehyde is also observed to decrease the P_sat,S of the compound. This can be simply rationalised by the greater than 25 % increase in molecular weight that it causes. The higher a compound's molecular weight, the greater the overall extent of interaction between its molecules and the lower its P_sat,S. Finally, considering the nitrobenzoic acids (Table 6), the highest-P_sat,S compound is 4-methyl-3-nitrobenzoic acid (4.67 × 10^−3 Pa). Its isomer, 3-methyl-4-nitrobenzoic acid, possesses a slightly lower P_sat,S (3.97 × 10^−3 Pa) as well as a slightly lower partial charge of the carboxylic carbon (0.628 vs. 0.644), although the difference in P_sat,S is not significant. Removing the methyl group from 4-methyl-3-nitrobenzoic acid to give 3-nitrobenzoic acid (1.10 × 10^−3 Pa) reduces the observed P_sat,S, most likely due to the reduction in steric hindrance around the nitro group, which would allow for more effective H bonding. In addition, 3-nitrobenzoic acid possesses a lower P_sat,S than the corresponding 3-nitrobenzaldehyde due to its ability to form H bonds. Adding a hydroxyl group or a Cl atom to 3-nitrobenzoic acid to give 2-hydroxy-5-nitrobenzoic acid (1.79 × 10^−3 Pa) or 2-chloro-3-nitrobenzoic acid (1.97 × 10^−3 Pa) respectively increases the observed P_sat,S, as the addition of the extra functional group leads to increased intramolecular H bonding. Additionally, comparing 2-hydroxy-5-nitrobenzoic acid with 2-fluoro-4-nitrophenol demonstrates how the increased ability of a carboxylic acid to partake in H bonding, compared to an F atom, leads to a suppression of P_sat,S. 5-Chloro-2-nitrobenzoic acid has a higher P_sat,S (2.98 × 10^−3 Pa) than its structural isomer 2-chloro-3-nitrobenzoic acid (1.97 × 10^−3 Pa). The increase in P_sat,S can be attributed to the increased partial charge of the carbon within the carboxylic acid group (0.627 increasing to 0.640). When comparing nitrobenzoic acids as a whole with nitrophenols, the nitrobenzoic acids have a much higher P_sat,S than would be expected based solely on the partial charges of the carboxylic carbon. As can be seen in Fig. 8, there is overlap in the range of P_sat,S for the nitrobenzoic acids and many of the nitrophenols; however, there is no overlap in terms of partial charges of the carboxylic and phenolic carbons, with all of the nitrobenzoic acids having partial charges of the carboxylic carbon greater than 0.6, whilst the nitrophenols had much lower partial charges of the phenolic carbon, between 0.2 and 0.4.
It is widely known that the H bonds of carboxylic acids are stronger than the H bonds of alcohols (Ouellette et al., 2015b), so it would be expected that the carboxylic acids would have a lower P_sat,S. A likely reason why the P_sat,S of the nitrobenzoic acids is higher than would be expected, compared to the nitrophenols, based only on the partial charge of the carboxylic carbon is the propensity of carboxylic acids to dimerise (see Fig. 9). Nitrophenols are unable to dimerise, instead being able to form H bonds with up to two other molecules, as shown in Fig. 9. By dimerising, the nitrobenzoic acids, despite having much stronger H bonds than the nitrophenols, do not have a proportionally lower P_sat,S. In summary, the ability to form H bonds appears to be the most significant factor affecting the P_sat,S of a compound, with molecules that are able to form these strong intermolecular interactions generally exhibiting lower P_sat,S than those that cannot. Additionally, different functional groups are able to form different numbers of H bonds, with those able to form more H bonds generally suppressing P_sat,S to a greater extent than those that form fewer. The relative positioning of the functional groups responsible for the H bonding is also important: when they are positioned too close together, intramolecular H bonding can occur, which competes with intermolecular H bonding and generally raises P_sat,S. The positioning of non-H-bonding functional groups within the molecule can also have an impact upon the extent of H bonding, with bulky substituents positioned close to H-bonding groups causing steric hindrance, which reduces the extent of H bonding and generally raises P_sat,S. The positioning of all the functional groups around the aromatic ring affects the partial charges of the atoms, via a combination of the inductive effect and the resonance effect. The inductive effect and the partial charges appear to be most important when comparing isomers and less important when one functional group has been swapped for another. In addition, greater molecular weight and increased crystallographic packing density also correlate negatively with P_sat,S, as both lead to increased overall intermolecular interactions. However, in many cases these different factors compete with each other, making it difficult to predict the expected P_sat,S, and currently it is not possible to determine which factor will dominate in any given case. Dipole moments were also investigated but overall showed very little impact on P_sat,S.

4.2 Subcooled liquid vapour pressure
The P_sat,L values were obtained from the P_sat,S values using thermochemical data obtained with a DSC and Eq. (3). The results are detailed in Table 7. Comparing the P_sat,L of the nitrophenols with the solid state values, there are a few changes in the overall ordering, but they mostly have little effect upon the preceding discussion. A few previously significant increases/decreases in P_sat become insignificant, and a few that were insignificant are now significant. One point of note, however, is that 3-methyl-4-nitrophenol (5.86 × 10^−2 Pa) now exhibits a higher P_sat than 3-fluoro-4-nitrophenol (3.32 × 10^−2 Pa). This trend is what would be expected based on the reduction in steric hindrance, the increased potential for H bonding and the increase in the partial charge of the phenolic carbon that the F atom provides in comparison to the methyl group.
For the nitrobenzaldehydes, one change in the overall ordering of the P_sat values is observed after converting to P_sat,L, but this has no effect on the preceding discussion. Finally, for the nitrobenzoic acids, whilst some previously insignificant differences in P_sat,S have now become significant, the only change that impacts the discussion is that the P_sat of 3-methyl-4-nitrobenzoic acid (3.04 × 10^−1 Pa) is now higher than that of 4-methyl-3-nitrobenzoic acid (5.76 × 10^−2 Pa). This change could be explained by the higher partial charge of the carboxylic carbon of 4-methyl-3-nitrobenzoic acid (0.646 vs. 0.628) (Stewart, 2016) playing a more important role in the subcooled liquid state than in the solid state.

Comparison with estimations from GCMs
In Fig. 10 the experimentally determined P_sat,L values of the nitroaromatics are compared to the predicted values of several GCMs. All predicted values can be found in Table S1 in the Supplement. The average difference between the experimental P_sat,L and the predicted P_sat,L for each class of compound, and overall, is shown in Table 8. These GCMs are SIMPOL (Pankow and Asher, 2008), the Nannoolal et al. method (Nannoolal et al., 2008) and the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997). The Nannoolal et al. method (Nannoolal et al., 2008) and the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) are both combined methods, which require a boiling point to function. As the experimental boiling point is unknown for many compounds, boiling point group contribution methods are required; the Nannoolal et al. method (Nannoolal et al., 2004) and the Stein and Brown method (Stein and Brown, 1994) are used. The Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) shows poor agreement with the experimental data for almost all compounds, which is not particularly surprising given that it contains only three nitroaromatic compounds in its fitting data set, none of which contains both a nitro group and another oxygen-containing group. The Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) is the oldest method examined in this study, and much of the atmospherically relevant P_sat data have been collected after the end of the development of this model. The Myrdal and Yalkowsky method's (Myrdal and Yalkowsky, 1997) reliance on a predicted boiling point may also be a major source of error in the P_sat predictions of the nitroaromatics. On average, the SIMPOL method (Pankow and Asher, 2008) predicts values closest to the experimental data, on average predicting P_sat,L 1.3 orders of magnitude higher than the experimental values, despite absolute differences of up to 4.4 orders of magnitude. The Nannoolal et al. boiling point method (Nannoolal et al., 2004) performs persistently worse than the Stein and Brown method (Stein and Brown, 1994) for the nitroaromatic compounds involved in this study, as shown in Table 8. Where the Nannoolal et al. method (Nannoolal et al., 2008) and the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) are discussed from this point onwards, they are used with the Stein and Brown method (Stein and Brown, 1994) unless stated otherwise.
method (Nannoolal et al., 2008) has slightly better agreement with the experimental data when compared to the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997), on average predicting P sat L 2.52 orders of magnitude higher than the experimental values, whereas the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) on average predicts P sat L 2.65 orders of magnitude higher than the experimental values. The Nannoolal et al. method (Nannoolal et al., 2008), unlike the others, contains parameters for ortho, meta and para isomerism (Nannoolal et al., 2008), MY_Vp is the Myrdal and Yalkowsky vapour pressure method (Myrdal and Yalkowsky, 1997), N_Tb is the Nannoolal et al. boiling point method (Nannoolal et al., 2004), and SB_Tb is the Stein and Brown boiling point method (Stein and Brown, 1994 and even demonstrates the same trend as the experimental data for 2-nitrobenzaldehyde, 3-nitrobenzaldehyde and 4nitrobenzaldehyde, although 3 orders of magnitude higher. Despite the ortho, meta and para parameters, as soon as a third functional group is present around the aromatic ring the Nannoolal et al. method (Nannoolal et al., 2008) no longer accounts for relative positioning of the functional groups. Figure 10a shows the comparison between the experimental and predicted P sat L for the nitrophenols. Both SIM-POL (Pankow and Asher, 2008) and the Nannoolal et al. method (Nannoolal et al., 2008) contain nitrophenol data from Schwarzenbach et al. (1988). These data of Schwarzen-bach et al. (1988), however, are questionable in reliability due to being taken from a single data point from a single data set. The values given are also 3-4 orders of magnitude greater than those measured in this work as well as those measured by Bannan et al. (2017) and those measured by Dang et al. (2019). The use of the Schwarzenbach et al. (1988) nitrophenol P sat data, which make up 11 of the 12 nitrophenol data points within the fitting data set of the SIMPOL method (Pankow and Asher, 2008), is a likely cause of the SIMPOL method (Pankow and Asher, 2008) overestimating the P sat of nitrophenols by 3 to 4 orders of magnitude. The one nitrophenol used in the SIMPOL method (Pankow and Asher, P. D. Shelley et al.: Measured solid state and subcooled liquid vapour pressures of nitroaromatics 2008) not from Schwarzenbach et al. (1988), 3-nitrophenol from Ribeiro da Silva et al. (1992), has a much lower P sat than those of Schwarzenbach et al. (1988) and is only 1 order of magnitude higher than that from Bannan et al. (2017). Additionally, whilst the Nannoolal et al. (2008) method performs slightly better than the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) overall for this study, when taking the nitrophenol data in isolation this performance is flipped, with the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) showing better performance (overestimating on average by 3.4 to 3.5 orders of magnitude). Figure 10b shows the comparison between the experimental and predicted P sat L for the nitrobenzaldehydes. There are no nitrobenzaldehydes present in any fitting data set of the GCMs considered in this study. Despite this, whilst not capturing the effects of ortho, meta and para isomerism, SIM-POL (Pankow and Asher, 2008) predicts the P sat of the nitrobenzaldehydes to, on average, 0.29 orders of magnitude. 
Polar groups such as aldehydes have been shown to have little impact on volatility in the pure component, and by extension on P sat (Bilde et al., 2015); this implies that SIMPOL (Pankow and Asher, 2008) captures the contribution of the nitro group very well. Similar to the nitrophenols, the performance of the Nannoolal et al. method (Nannoolal et al., 2008) and the Myrdal and Yalkowsky method (Myrdal and Yalkowsky, 1997) is switched for the nitrobenzaldehydes compared to the entire data set: the Myrdal and Yalkowsky method overestimates by 2.4 orders of magnitude, compared to the Nannoolal et al. method, which overestimates by 2.5 orders of magnitude. Figure 10c shows the comparison between the experimental and predicted P sat L for the nitrobenzoic acids. SIMPOL (Pankow and Asher, 2008) contains nitrobenzoic acid data, though in limited amounts, in its fitting data set. Although no list of the data used to form the Nannoolal et al. method (Nannoolal et al., 2008) is available (to the authors' knowledge), it is stated that the values come from the Dortmund Data Bank, and searches on this database show that nitrobenzoic acid P sat data are available there. Having even this limited number of data available for the nitrobenzoic acids allows SIMPOL (Pankow and Asher, 2008) to predict the P sat L values of 5-chloro-2-nitrobenzoic acid, 3-nitrobenzoic acid, 2-chloro-3-nitrobenzoic acid and 2-hydroxy-5-nitrobenzoic acid to within 1 order of magnitude of the experimental values. On average the SIMPOL (Pankow and Asher, 2008) method underestimates P sat L by 0.8 orders of magnitude. The nitrobenzoic acids that had large discrepancies with SIMPOL (Pankow and Asher, 2008), 4-methyl-3-nitrobenzoic acid and 3-methyl-4-nitrobenzoic acid, as well as 2-hydroxy-5-nitrobenzoic acid, agreed to within 1 order of magnitude with the Nannoolal et al. method (Nannoolal et al., 2008). On average the Nannoolal et al. method (Nannoolal et al., 2008) overestimates P sat L by 0.9 orders of magnitude. Overall, SIMPOL (Pankow and Asher, 2008) performs relatively well for the nitrobenzaldehydes and the nitrobenzoic acids, and the Nannoolal et al. method (Nannoolal et al., 2008) performs moderately well for the nitrobenzoic acids when compared to the experimental values found in this study. All of the methods perform poorly when compared to the experimental nitrophenol values. These observations are not particularly surprising when taking into account how the methods were fitted and what data are present in the fitting set. One surprising observation comes when looking at the halogenated nitroaromatics. SIMPOL (Pankow and Asher, 2008) has the smallest order of magnitude difference between experimental and predicted P sat L for all of the halogenated nitroaromatics in this study. This is particularly surprising as SIMPOL (Pankow and Asher, 2008) contains no halogenated compounds in its fitting data set, whereas the other GCMs do. This implies that accurately predicting the impact on P sat L of the carbon skeleton and other functional groups such as nitro, hydroxy, aldehyde and carboxylic acid is more important than accurately predicting the impact of a chloro or fluoro group. When looking at nitroaromatics as a whole, SIMPOL (Pankow and Asher, 2008) shows the smallest difference between experimental and predicted P sat L (as shown in Table 8) and would therefore be the most appropriate method to use when predicting P sat L for this group of compounds.
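The "orders of magnitude" differences quoted throughout this comparison (and averaged in Table 8) are differences of log10 vapour pressures. A minimal sketch of that bookkeeping, using hypothetical values, is:

```python
import math

def mean_log10_offset(predicted, experimental):
    """Average signed offset, in orders of magnitude, of predicted vapour
    pressures relative to experimental values: mean of log10(pred / exp).
    A value of +2.5 means the method over-predicts by ~2.5 orders of
    magnitude on average."""
    offsets = [math.log10(p / e) for p, e in zip(predicted, experimental)]
    return sum(offsets) / len(offsets)

# Hypothetical values for three compounds (Pa): each prediction is high.
exp_psat  = [1.4e-3, 5.9e-2, 1.9e-3]
pred_psat = [3.1e-1, 8.0e-1, 2.2e-2]
print(f"{mean_log10_offset(pred_psat, exp_psat):+.2f} orders of magnitude")
```

A positive result indicates over-prediction, matching the sign convention used above.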
In the case of nitrophenols, despite SIMPOL (Pankow and Asher, 2008) showing the best performance, the absolute differences are still close to 3 orders of magnitude, so any work using these predictions should be aware of the very large errors that they could introduce. For nitrobenzaldehydes SIMPOL (Pankow and Asher, 2008) shows very good agreement and is the clear choice to be used when predicting P sat L . For nitrobenzoic acids the preferred method for predicting P sat L is not quite as clear. Both the Nannoolal et al. method (Nannoolal et al., 2008) and SIMPOL (Pankow and Asher, 2008) predict P sat L within an order of magnitude, with the Nannoolal et al. method generally overestimating and SIMPOL underestimating.

Comparison with existing experimental data

For the compounds in this study that had previous literature data, there are differences from the values determined experimentally in this work. The differences between the values from this work and those of Dang et al. (2019) are discussed in Sect. 4.5 but can be attributed to the use of a different reference compound. For the nitrophenols, shown in Fig. 10a, the differences between the experimental values and the literature values from Schwarzenbach et al. (1988) range from 3 to 4 orders of magnitude. The relationship between the P sat L and temperature from Schwarzenbach et al. (1988) was derived from gas chromatographic (GC) retention data. This GC method requires a reference compound of known P sat and requires the reference compound and the compound of interest to have very similar interactions with the stationary phase of the GC. Schwarzenbach et al. (1988) used 2-nitrophenol as the reference compound for all of the other nitrophenol data they collected. In this work the P sat L of 2-nitrophenol at 298 K was 1.38 × 10 −3 Pa, whereas Schwarzenbach et al. (1988) reported it as 2.69 × 10 1 Pa. As the P sat of 2-nitrophenol in this work and in Schwarzenbach et al. (1988) differ by approximately 4 orders of magnitude, this could explain why the other nitrophenol measurements also differ by 3-4 orders of magnitude. For the nitrobenzaldehydes, shown in Fig. 10b, the literature data from Perry et al. (1984) and the experimental data from this work agree within 1 order of magnitude, with 2-nitrobenzaldehyde especially agreeing very closely (2.39 × 10 0 Pa vs. 2.15 × 10 0 Pa). The nitrobenzoic acids are shown in Fig. 10c. The value for 3-nitrobenzoic acid from this work is 1.90 × 10 −3 Pa compared to 5.05 × 10 −3 Pa from Ribeiro da Silva et al. (1999). Whilst not matching perfectly, the P sat of 3-nitrobenzoic acid is of this order of magnitude. The disagreements between the values of this work and the values from Monte et al. (2001) for 4-methyl-3-nitrobenzoic acid and 3-methyl-4-nitrobenzoic acid are quite large: 4-methyl-3-nitrobenzoic acid differs by over 1 order of magnitude, and 3-methyl-4-nitrobenzoic acid by closer to 2 orders of magnitude. The P sat values from Monte et al. (2001) were collected using a Knudsen mass loss method. Knudsen mass loss is similar to KEMS in that it also utilises a Knudsen cell which effuses the compound of interest. However, for a detectable amount of mass to be lost, the experiments need to be performed at higher temperatures than with the KEMS. This means that the data must be extrapolated further to reach ambient temperatures. This is a potential source of error and could explain the difference.
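Both the Knudsen mass-loss data discussed above and the KEMS data used later are extrapolated in temperature, typically by fitting a Clausius-Clapeyron form ln P = A - B/T (constant enthalpy assumed). The sketch below, with hypothetical measurements, illustrates why a long extrapolation amplifies error; the function and numbers are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def extrapolate_psat(temps_k, psats_pa, t_target):
    """Fit ln(P) = A - B / T to measurements taken at elevated temperature,
    then extrapolate to t_target. The further t_target lies below the
    measured range, the more any curvature (a temperature-dependent
    enthalpy) turns into systematic error -- the concern raised in the
    text for Knudsen mass-loss data."""
    inv_t = 1.0 / np.asarray(temps_k)
    coeffs = np.polyfit(inv_t, np.log(np.asarray(psats_pa)), 1)  # [-B, A]
    return float(np.exp(np.polyval(coeffs, 1.0 / t_target)))

# Hypothetical high-temperature measurements extrapolated down to 298 K.
temps = [330.0, 340.0, 350.0, 360.0]
psats = [2.1e-2, 4.9e-2, 1.1e-1, 2.3e-1]
print(extrapolate_psat(temps, psats, 298.0))
```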
Measurement by a third or even fourth technique would be required to confirm this extrapolation effect.

Sensitivity of vapour pressure measurement techniques to reference standards

The recently published paper by Dang et al. (2019) measured the P sat of several of the same compounds that are studied in this paper using the same KEMS system; however, in this study the newly defined best-practice reference sample was used (Krieger et al., 2018), whereas Dang et al. (2019) used malonic acid. These compounds were 4-methyl-3-nitrophenol, 3-methyl-4-nitrophenol and 4-methyl-2-nitrophenol. The difference in reference compound led to a discrepancy in the experimental P sat (shown in Table 9). Due to these differences, additional measurements were made using malonic acid as the reference material. Additionally, supporting measurements for the compounds were performed using the EDB from ETH Zurich in order to rule out instrumental problems with the KEMS. Comparisons between the P sat at 298 K from the KEMS using a PEG reference, the KEMS using a malonic acid reference, Dang et al. (2019) and the EDB are shown in Table 9. Following this, P sat L values, extrapolated down to 290 K, from the KEMS using a PEG reference and the KEMS using a malonic acid reference are compared to the estimated P sat L based on the findings from the EDB using Eq. (2). Whilst the absolute values of the nitrophenols shown in Table 9 changed, the P sat trends did not. The values from Dang et al. (2019) are between 4.39 and 7.81 times lower than those in this work using the PEGs as the reference compound, which is now deemed best practice in the community. To ensure that the difference in reference compound was the cause of the difference in P sat , 4-methyl-2-nitrophenol, 4-methyl-3-nitrophenol and 3-methyl-4-nitrophenol were measured again using malonic acid as the reference. The differences between the P sat determined by Dang et al. (2019) and those in this work using malonic acid as a reference compound were between 2 % and 27 %, which is well within the quoted 40 % error margin of the KEMS (Booth et al., 2009), showing that the instrument behaves reproducibly, with improved reference standards now being used, as discussed below. Starting with 4-methyl-3-nitrophenol, the EDB has much better agreement with the KEMS when the PEGs are used as the reference compound than when malonic acid is used. When the quoted errors of both the EDB (shown in Table 9) and the KEMS (±40 % for P sat S and ±75 % for P sat L ; Booth et al., 2009) are taken into account, the lower limit of the EDB (1.57 × 10 −2 Pa) and the upper limit of the KEMS using the PEG references (1.51 × 10 −2 Pa) almost overlap, whereas the EDB data are almost 1 order of magnitude larger than the KEMS when the malonic acid reference is used (shown in Fig. 11). For 3-methyl-4-nitrophenol a comparison can be made for both P sat S and P sat L . Looking first at the P sat S , the EDB value appears to lie between the two KEMS values, depending on which reference the KEMS uses, with its absolute value closer to that of the malonic acid reference. However, when the quoted errors are taken into account (shown in Table 9), the EDB actually has better agreement with the KEMS when the PEG references are used. This can be seen more clearly in Fig. 11. For P sat L , the EDB and the KEMS using the PEG references appear to agree very well, with a large overlap when the quoted errors are taken into account. This can also be seen in Fig. 11.
The confidence with which the comparison between the EDB and the KEMS can be made for 4-methyl-2-nitrophenol is lower than for the other compounds examined, due to how quickly 4-methyl-2-nitrophenol evaporated in the EDB. To make this comparison, the P sat L from the KEMS measurements has been extrapolated down to 290 K to match that of the EDB estimation. The predicted EDB value (shown in Fig. 11) is higher than the KEMS for both references but has a very large error margin (approximately a factor of 5). When this error is considered, the KEMS using the PEG reference is within this range, whereas there is close to an order of magnitude difference between the lower limit of this estimate and the upper limit of the KEMS when malonic acid is used as the reference.

Table 9. Comparison between nitrophenols measured in this paper and by Dang et al. (2019).

Figure 11. Comparison of P sat between the EDB and the KEMS using both PEGs and malonic acid as the reference compound (SS - solid state, SCL - subcooled liquid).

In all cases the EDB showed better agreement with the KEMS using the PEGs as the reference material than when malonic acid was used as the reference material. For 4-methyl-3-nitrophenol the agreement was very close between the EDB and the KEMS using the PEGs as the reference compounds, and for 3-methyl-4-nitrophenol the measurements from the EDB and the KEMS agreed with each other within the quoted errors. For 4-methyl-2-nitrophenol the KEMS with PEG as a reference also showed the best agreement with the EDB, but as this was an estimate with a large error range, this comparison is the least certain.

Conclusions

Experimental values for the P sat S and P sat L have been obtained using KEMS and DSC for nitrophenols, nitrobenzaldehydes and nitrobenzoic acids. The predictive models have been shown to overestimate P sat L in almost every instance by several orders of magnitude. As the P sat values from these predictive techniques are often used in mechanistic partitioning models (Lee-Taylor et al., 2011; Shiraiwa et al., 2013), the overestimation of the P sat can lead to an overestimation of the fraction in the gaseous state. The experimental values from this study can be used in conjunction with other measurements to improve the accuracy of GCMs and give an insight into the impact of functional group positioning, which is missing, or only available in a limited capacity, in the currently available GCMs. The differences in trends of the experimental P sat have been explained chemically, with the potential for and strength of H bonding appearing to be the most significant factor, where present, in determining the P sat : stronger hydrogen bonds and an increasing number of possible hydrogen bonds decrease the P sat . Whilst H bonding is typically the most important factor, it is not the only one. Steric effects of functional groups can also have significant effects on the P sat . In the solid state, crystallographic packing density can also be an important factor. To further investigate the impacts of H bonding, inductive and resonance effects, and steric effects on P sat , more compounds need to be investigated, with select compounds chosen to probe these effects. The predictive models consistently overestimate the P sat L values by up to 6 orders of magnitude, with the nitrophenols predicted especially poorly. This demonstrates a need for more experimental data in the fitting data sets of the GCMs to reduce the errors and give more accurate results for nitroaromatic compounds.
Deviations between the measurements in Dang et al. (2019) and this work can be explained by the difference in the reference material used, which demonstrates the necessity of a consistent, widely used reference compound. The PEG series, examined by Krieger et al. (2018), is currently the preferred reference/calibration series. Comparisons between the KEMS and the EDB from ETH were made for several nitrophenols. The EDB showed close agreement with the KEMS when the PEG series was used as the reference compounds. Compounds such as the nitrobenzaldehydes, which are capable of being H-bond acceptors but not H-bond donors, are likely to deviate negatively from Raoult's law in mixtures with compounds that can act as H-bond donors, due to the adhesive forces present. This could call into question the validity of pure component vapour pressure measurements for studying atmospheric systems, as the atmosphere is not made up of pure components. Investigating this would be an interesting avenue of research and the natural progression from pure component measurements.

Author contributions. PDS carried out the experiments on the KEMS and DSC. UKK carried out the experiments on the EDB. Formal analysis of the data was carried out by PDS, SDW and UKK. Project supervision was undertaken by DT, MRA and TJB. KEMS training was performed by TJB. Access to and training on the DSC was provided by AG. Verification of the reliability of the KEMS was carried out by UKK, with the EDB measurements being used to validate the KEMS measurements. The original draft manuscript was written by PDS, SDW and CJP. Internal review and editing was performed by TJB, DT, MRA, SDW and UKK.
SLAUGHTER VALUE OF POLISH LANDRACE FATTENERS FROM FARMS IN CENTRAL-EASTERN POLAND

The aim of this work is to evaluate the slaughter value of porkers from individual farms in the same producer group located in central-eastern Poland. The research was conducted on 322 fatteners of the Polish Landrace breed. The research material was classified according to two research factors: supplier and season of the year. One group of fatteners was slaughtered in the autumn (September-October) and the second in spring (April-May). The studied population of fatteners was characterized by high meatiness at an average level of 58% and an average hot carcass weight of 89.99 kg. All carcasses were classified into the highest classes of the SEUROP system: 29.81% as class S, 51.86% as class E and 18.32% as class U. A statistically significant influence of supplier was found for hot carcass weight, thickness of the longissimus dorsi muscle at M1, and slaughtering efficiency. A statistically significant influence of slaughtering season on hot carcass weight and back fat thickness at points S1 and S2 was also found. Pigs slaughtered in spring were found to have a lower hot carcass weight and thinner back fat than those slaughtered in autumn. The interaction between supplier and slaughtering season was found to be statistically significant for hot carcass weight, meatiness, thickness of the longissimus dorsi at M2, and thickness of back fat measured at S1. The obtained research results indicate the high slaughter value of porkers kept on individual farms within the same producer group, and that the pork obtained from these pigs meets the requirements set by the meat industry and consumers.

INTRODUCTION

The quality of domestic pork raw material has been the subject of interest for both scientists and technologists working for the meat industry for over two decades (Różycki 1998; Grześkowiak 1999; Strzelecki et al. 2001; Koćwin-Podsiadła et al. 2004). This is mainly due to the preferences and requirements of consumers, who have turned their attention from very lean meat (low intramuscular fat) towards meat and meat products with high sensory qualities (Wood et al. 1994; Andersen et al. 2005; Vandendriessche 2008). Many years of work by Polish scientists, breeders and technologists has improved the production and processing of pork and resulted in a significant increase in the meat content of pig carcasses and a reduction in their fatness (Różycki 1998; Blicharski et al. 2004; Koćwin-Podsiadła et al. 2004; Lisiak and Borzuta 2008). The need for systematic improvement in meatiness (about 1% annually) and an increase in the slaughter value of porkers was driven by the introduction and legal sanctioning in 1993 of the objective classification of pig carcasses according to the SEUROP system and by rewarding pig producers for meatiness (Dumas and Dhorne 1998; Borzuta 1999; Lisiak et al. 2005; Florek et al. 2006). In addition to meatiness, the weight of slaughtered fatteners also affects the slaughter value. The domestic meat industry prefers slaughter at a higher carcass weight, while maintaining high meatiness (55-58%). Meat from light carcasses is characterized by higher post-mortem meatiness, but it has limited processing usefulness. In Poland, pork meatiness has increased at the same time as the hot carcass weight of pigs slaughtered in meat processing plants. From 2012-2017, the meatiness in carcasses stabilized at a high level (56.5-57.7%), and so did the hot carcass weight (90-92.5 kg).
This weight meets the requirements of the domestic meat industry (Lisiak et al. 2005; GUS 2017). Bearing in mind the above, there is a need for a detailed analysis of pig slaughter raw material from smaller individual farms within the same producer group. The aim of the study is to evaluate the slaughter value of fatteners, depending on the supplier and the season of the year, from individual farms located in central-eastern Poland.

MATERIAL AND METHODS

The research was conducted on 322 fatteners of the Polish Landrace breed. The animals came from three farms (A, B, C) located in central-eastern Poland, associated with the same producer group. The research material was grouped according to two research factors: supplier and season of the year. The first research group of fatteners was slaughtered in the autumn (from September to October), the second group in the spring (from April to May). In the experiment, the same share of gilts and hogs was taken for each supplier and season. Due to this, gender was eliminated as a factor that could have a significant impact on the slaughter value of fatteners. During the rearing period, the animals were provided with very similar living and feeding conditions. Pigs were fed with mixtures prepared from cereals from their own farm (30% triticale meal, 60% barley grits) and a high-protein concentrate. Animals were slaughtered using gas stunning (carbon dioxide) in the same meat factory, located 20 km from the farms. Slaughtering was done after a short rest of the animals, according to the typical technology used in the meat factory. After completing the procedures typical for a meat factory, the evaluation of carcasses was carried out using the Ultra-Fom 300 apparatus (SFK Technology) for the following traits:
- percentage meat content in the carcass (meatiness),
- thickness of the longissimus dorsi (LD) muscle after the last rib at a distance of 7 cm from the intersection line of carcasses cut into half-carcasses (M1),
- thickness of the LD muscle between the 3rd and 4th ribs counted from the end (M2),
- thickness of the back fat after the last rib at a distance of 7 cm from the intersection line of carcasses cut into half-carcasses (S1),
- thickness of the back fat between the 3rd and 4th ribs counted from the end (S2).
The hot carcass weight was also established on the weighing scales within 35 minutes after slaughter. The results were directly recorded by a computer connected to the Ultra-Fom 300 apparatus with an accuracy of 0.1 kg. The obtained results were analyzed using the statistical package STATISTICA 12.5 PL (StatSoft, Tulsa, OK, USA). The influence of supplier (A, B, C), season (autumn, spring) and their interaction (supplier × season) on the results was estimated using a two-factor analysis of variance in a non-orthogonal system, according to the linear model y_ijk = μ + a_i + b_j + (ab)_ij + e_ijk, where μ is the overall mean, a_i the effect of supplier, b_j the effect of season, (ab)_ij their interaction and e_ijk the random error. The significance of differences between means was verified using the NIR (LSD) test (Luszniewicz and Słaby 2001).

RESULTS AND DISCUSSION

The population of Polish Landrace fatteners analyzed in this study (322 pigs) had an average carcass meatiness of 58.11 ± 3.01%, with a low coefficient of variation of 5.18% (Table 1). The high meat content of the tested pigs was reflected in the SEUROP classification, as all carcasses were classified into the highest meatiness classes: 29.81% of carcasses were classified as S, 51.86% as E, and 18.32% as U (Fig. 1). (Table 1: X - arithmetic mean, SD - standard deviation, V - coefficient of variation.)
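As an illustration of the statistical model described above, the same two-factor analysis can be sketched in Python; this is a generic sketch rather than the STATISTICA 12.5 workflow actually used, and the data frame below contains hypothetical values. Sum-to-zero contrasts with Type III sums of squares are assumed here, a common choice for an unbalanced (non-orthogonal) layout.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical records: one row per carcass, with supplier (A/B/C),
# season (autumn/spring) and hot carcass weight (kg).
df = pd.DataFrame({
    "supplier": ["A", "B", "C", "A", "B", "C", "A", "B", "C", "A", "B", "C"],
    "season":   ["autumn"] * 6 + ["spring"] * 6,
    "hcw_kg":   [96.1, 85.2, 90.4, 97.3, 84.0, 91.1,
                 93.8, 83.1, 88.0, 92.5, 82.7, 87.4],
})

# Two-factor model with interaction; Sum (sum-to-zero) coding so that
# Type III sums of squares are meaningful in a non-orthogonal design.
model = ols("hcw_kg ~ C(supplier, Sum) * C(season, Sum)", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))
```

Pairwise comparisons corresponding to the NIR (LSD) test could then be made between group means using the residual mean square from this table.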
The average meatiness noted in this research was higher (by more than 1%) than the average meatiness of fatteners in 2016, which was 57% (Lisiak et al. 2016). Back fat thickness measured at both S1 and S2 was characterized by high variability (Table 1). In a study conducted by Antosik and Koćwin-Podsiadła (2010) on 2851 fatteners from mass populations, the average thickness of back fat was 13.00 mm at S1 and 12.2 mm at S2, and the average LD muscle thickness was around 61 mm at M1. In turn, Zybert et al. (2015), in their analysis of 9000 fatteners from mass populations, recorded an average back fat thickness at points S1 and S2 of 15.3 mm and 15.32 mm, respectively, and an average LD thickness at points M1 and M2 of 58.22 mm and 57.75 mm, respectively. In summary, in our study the thickness of the LD muscle, i.e. a feature closely related to musculature, took intermediate values between the studies by Antosik and Koćwin-Podsiadła (2010) and Zybert et al. (2015): the first authors obtained a higher thickness of the LD muscle, and the second a lower one, compared to our study. In this research, slaughtering efficiency was 78.19 ± 7.14%, with a coefficient of variation of 9.13%. This is consistent with the slaughtering efficiency found in other studies, which ranges from 75-85% (Weatherup et al. 1998; Zybert et al. 2001; Koćwin-Podsiadła et al. 2004). The two-factor analysis of variance in a non-orthogonal system showed a statistically significant (at p ≤ 0.05) or highly statistically significant (at p ≤ 0.01) influence of the first research factor (supplier) on hot carcass weight, LD muscle thickness measured at M1, and slaughtering efficiency. The influence of the second research factor (season) on hot carcass weight and on back fat thickness at S1 and S2 was significant at p ≤ 0.01 and p ≤ 0.05, respectively. The interaction between the two research factors (supplier and season) was found to have a significant influence on hot carcass weight, meatiness, and back fat and LD muscle thickness measured at point 2 (S2 and M2) (Table 2). When analyzing each supplier separately, no statistically significant difference in carcass meatiness was found among suppliers. The highest meatiness was found in carcasses from supplier B (58.32 ± 2.83%), then supplier C (57.99 ± 3.00%) and finally supplier A (57.77 ± 3.12%) (Table 3). The SEUROP classes for carcasses from each supplier reflect the trend described above: supplier B had the largest percentage of carcasses with the highest meatiness (classified as S) and the lowest percentage of carcasses in the U class (Fig. 2). Group A had a statistically significantly lower LD muscle thickness measured at point M1, by about 2.5 mm (58.21 ± 6.59 mm compared to 60.67 ± 5.30 mm), in relation to group B. The LD muscle thickness at M1 in group C fell between the values from groups A and B (59.30 ± 5.43 mm). Hot carcass weight was found to differ significantly among suppliers. Group B had the lowest hot carcass weight of 84.03 ± 7.44 kg, then group C at 89.23 ± 9.70 kg, followed by group A at 95.13 ± 10.31 kg (Table 3). This study confirmed the widespread tendency for an increase in hot carcass weight to be accompanied by a decrease in meatiness and an increase in back fat thickness. This trend has also been reported by many other researchers (Łyczyński et al. 2000; Zybert et al. 2001, 2005).
In the studies by Antosik and Koćwin-Podsiadła (2010) conducted on mass populations of pigs, it was shown that an increase in hot carcass weight by 10 kg (from 80 kg to 90 kg) contributed to a decrease in carcass meatiness of 2.8%. Zybert et al. (2001), in a study on fatteners, found that a carcass weight over 85 kg contributed to a reduction in meatiness of 4.3%; however, for lightweight pigs (hot carcass weight up to 75 kg), no loss of meatiness was noted. Similarly to the results quoted above, Łyczyński et al. (2000) observed that fatteners whose carcass weight was higher than 90 kg had a statistically significantly lower meatiness and a higher back fat thickness compared to those whose weight was lower than 90 kg. Examining the influence of the second research factor (season of the year) on the traits of the research material statistically confirmed differences for hot carcass weight and back fat thickness at S1 and S2. Pigs slaughtered in spring (regardless of the supplier) were characterized by a lower hot carcass weight than those slaughtered in autumn, by about 7.5 kg (86.09 ± 8.45 kg compared to 93.58 ± 10.73 kg), as well as by a lower back fat thickness at point S1 by approx. 1.30 mm (15.23 ± 4.10 mm compared to 16.52 ± 4.29 mm) and at point S2 by approx. 1.02 mm (14.04 ± 3.85 mm compared to 15.06 ± 3.80 mm). The season did not produce a statistically significant difference in meatiness. However, a tendency was noted for meatiness to be 1% higher in fatteners slaughtered in spring, i.e. in fatteners about 7.5 kg lighter than those slaughtered in autumn (Table 4; explanations as in Table 3). Zybert et al. (2015), in their studies on mass raw material, examined the influence of slaughtering season on basic slaughter characteristics. They reported a statistically significant effect of slaughter season on hot carcass weight, meatiness, back fat thickness measured at S1 and S2, and LD muscle thickness at points M1 and M2. In their study, it was found that heavier pigs were slaughtered during the winter and spring, and the lightest ones in the summer. The authors also found the highest percentage (69.4%) of the most valuable carcasses (classes S and E) in fatteners weighing no more than 76 kg in winter. Another study on fatteners from the mass population found a statistically significant influence of season on hot carcass weight, meatiness, back fat thickness measured at point S2 and LD thickness at point M1. Pigs slaughtered in autumn had the highest meat content (58.50%), the thinnest back fat (11.55 mm) and the thickest LD muscle measured at M1 (62.24 mm) compared to the remaining seasons of spring, summer and winter. In turn, hot carcass weight was uniform in the autumn and winter seasons in relation to the spring and summer seasons (winter - 86.6 kg, autumn - 85.65 kg, against spring - 83.40 kg and summer - 84.2 kg). Gardzińska et al. (2002), in studies on landrace × (duroc × pietrain) crossbred fatteners, found a significant decrease in meatiness and a significant increase in back fat thickness in pigs whose weight on slaughter day exceeded 120 kg compared to fatteners of lower weights.

CONCLUSIONS

The analyzed population of Polish Landrace fatteners had high meatiness (an average level of 58%) and a high average hot carcass weight (about 90 kg). All analyzed carcasses were classified into the highest meat classes: S, E and U.
The influence of supplier on hot carcass weight, LD thickness at M1 and slaughtering efficiency, as well as the influence of season on hot carcass weight and back fat thickness at points S1 and S2, was found to be statistically significant. Pigs slaughtered in spring had a lower hot carcass weight and thinner back fat than those slaughtered in autumn. The interaction of supplier and season was also found to have a statistically significant influence on hot carcass weight, meatiness, back fat thickness measured at S1, and LD muscle thickness at M2. The obtained research results indicate the high slaughter value of porkers kept on individual farms within the same producer group; the pork obtained from these pigs meets the requirements set by the meat industry and consumers.
Responses of Diversity and Productivity to Organo-Mineral Fertilizer Inputs in a High-Natural-Value Grassland, Transylvanian Plain, Romania

Ecosystems with high natural value (HNV) have generally been maintained by agricultural practices and are increasingly important for the ecosystem services that they provide and for their socio-economic impact in an ever-changing context. Biodiversity conservation is one of the main objectives of the European Green Deal, which aims to address biodiversity loss, including the potential extinction of one million species. The aim of this research was to trace the effects of organic and mineral fertilizers on the floristic composition, and on the number of species, of the high-biodiversity (HNV) grasslands of the Transylvanian Plain, Romania. The experiments were established in 2018 in the nemoral zone and analyzed the effect of a gradient of five organic and mineral treatments. Fertilization with 10 t ha−1 manure or N50P25K25 ensures an increase in yield and has a small influence on diversity, and it could be a potential strategy for the maintenance and sustainable use of HNV grasslands. Each fertilization treatment determined species with indicator value that are very useful in the identification and management of HNV grasslands. The dry matter biomass increases proportionally as the amounts of fertilizer applied increase, while the number of species decreases.

Introduction

In Europe, semi-natural grasslands with extensive management are considered sources of biodiversity, some of which are even of major international importance, competing with the habitats that hold world records in the number of species per unit area [1,2]. These ecosystems with high natural value (HNV) have generally been maintained by agricultural practices, so their phytodiversity has developed over the centuries in close correlation with the type of management applied [3-5]. Semi-natural grasslands, mainly used for feed production, are an important component of land use in Europe, covering more than a third of the agricultural area [6,7]. The biodiversity, and especially the floristic diversity, of semi-natural grasslands has become a major concept in agronomic research. The high diversity of semi-natural grasslands brings many benefits to farmers and consumers, as well as ecosystem services. These include, in the agronomic context, the harvest, the supply of nutrients through the decomposition of organic matter, the reduction of nutrient leaching, pollination, soil conservation, and resistance to invasive species in the face of climate change [8,9]. Decreased biodiversity can affect the functions and services of ecosystems [10]. Moreover, in order to maintain a characteristic floristic composition, proper management is needed, with reduced quantities of fertilizers [11]. A large body of research has focused on increasing the production of grasslands and has examined the effect of applying organo-mineral fertilizers, both in terms of productivity and floristic composition [2,12]. At the European level, grasslands play an important role as a source of fodder for both domestic and wild animals. At the same time, grasslands have a multifunctional ecological role, forming an ecosystem that is a habitat for flora and fauna [13]. The growing need for food due to the increase in population has led to the significant intensification of grassland areas, especially in the last 50 years [14].
The development of evaluation methods and the establishment of indicators to highlight the effect of management on grassland systems is a current concern [15,16]. The effect of management on the productivity and biodiversity of high-natural-value (HNV) grasslands is increasingly being addressed at the European level [17]. Currently, in Romania, specialists are trying to assess the management of high-biodiversity (HNV) grasslands with the help of species with indicator value and to draw up a list of such species, taking into account the site conditions and the intensity of the management used [2,11,17,18]. An innovative feature of this research is the analysis of the reaction of semi-natural grasslands to treatment with organic and mineral fertilizers. In order to identify the ecological and agronomic value of semi-natural grasslands, both knowledge of the ecology of plant communities and knowledge of indicator plant species are needed [19,20]. Until now, research has analyzed the effect of fertilizers on the yield of dry matter (DM) first, and only later the floristic composition. It was stated that the yield of dry matter increases as the quantities of fertilizer increase, but this was not correlated with the new grassland types installed as a result of intensification, which have an increased productive potential. Therefore, the present research first proposes an analysis of the reaction of the floristic composition (to the applied inputs) and then an analysis of the dry matter (DM) yield. At the same time, the current threats to these systems are diverse and persistent, despite global and European (Green Deal) policies that address these shortcomings. Climate change, abandonment, and intensification of systems are just a few of them. The sustainability of the use of natural and semi-natural grassland systems is a current and widely debated topic, from the global to the regional level, in many areas of activity [21]. The aim of this research was to trace the effects of organic and mineral fertilizers on the floristic composition, and on the biodiversity, of the HNV grasslands of the Transylvanian Plain. This aim is shaped by the current context of the European Green Deal. The distinguishing premise of this study is that, in grassland science, fertilizers are usually applied following economic criteria to increase productivity. Until now, research has analyzed the effect of fertilizers on biomass (DM) and only later the floristic composition, an approach that provides incomplete information. Over time, this mode of analysis can lead to a restriction of the number of species in grassland ecosystems. In most studies it was stated that biomass increases as a result of increasing amounts of fertilizers, but this was not correlated with the newly installed grassland types, which resulted from intensification and have an increased productive potential. Therefore, this paper first treats the reaction of the floristic composition (to the applied inputs) and then analyzes the biomass. One solution that could be beneficial for grassland ecosystems would be to set up short-term experiments, which give a quick forecast of the changes that occur in these ecosystems. Vegetation analysis was performed both quantitatively and qualitatively.
The objectives evaluated in the experiment were formulated in the form of questions: (i) At which amount of fertilizer do major changes in the structure of the floristic composition occur? (ii) Up to what amount of fertilizer does the phytocenosis not lose its biodiversity? (iii) Can plant species with indicator value for the applied management be identified? (iv) Which fertilizer doses are optimal, so that there is a balance between productivity and biodiversity?

The Influence of Mineral and Organic Fertilizers on the Floristic Composition

Based on the cluster analysis, it is possible to observe the classification of the vegetation and the modification of the grassland type on the basis of the floristic distances between relevés (Figure 1). The cut-off level of the dendrogram was established on the basis of phytosociological and ecological meaning, in order to include as much information as possible [22]. Based on the analysis of the floristic composition, we considered that cutting off at the value of 50 is an optimal solution, as this has the highest phytosociological, ecological, and agronomic meaning. Thus, three distinct groups were identified. The formation of groups as a result of the application of inputs shows that fertilizer treatments have produced major changes in the vegetation. Each amount of fertilizer applied determined a particular floristic composition. The first group is represented by V1 and V2, the second group by V3, and the last group by the V4, V5, and V6 variants. In V1, we have the F. rupicola grassland type. In variants V2, V3, and V4, there are changes inside the phytocenosis, but these are not major, because the grassland type remains F. rupicola. Changes in the grassland type occur in V5 and V6, where we have the A. capillaris with F. rupicola grassland. Axis 1 accounts for 78.8% and axis 2 for 11.9% (Table 1). Note: r - correlation coefficient between ordination distances and original distances in n-dimensional space; V1 - natural grassland (control); V2 - 10 t ha−1 manure; V3 - 10 t ha−1 manure + N50P25K25; V4 - N50P25K25; V5 - N100P50K50; V6 - 10 t ha−1 manure + N100P50K50. Significance: p < 0.001 ***; p < 0.01 **; p < 0.05 *; ns - not significant. The PCoA ordination is shown in Figure 2. The control phytocenosis is represented by the F. rupicola type; in variants V2, V3, and V4, two years after application, there were only changes inside the phytocenosis, with no changes in the grassland type. In fact, a significant change at the level of the vegetal cover occurs when the quantity of fertilizer increases, namely for the application of N100P50K50 (V5) and for the application of the combination of mineral and organic fertilizers (V6). With these treatments, the A. capillaris with F. rupicola grassland is installed. Following the analysis of the floristic composition for the years 2019, 2020, and 2021, we obtained the following results. Some plant species (31 plant species) correlate with ordination axis 1 and others (21 plant species) with ordination axis 2, which means that they are favored by the absence of fertilization or by fertilization with small quantities of fertilizer (Table 2). Among them, we mention F. rupicola (p < 0.001), Lolium perenne, Vicia cracca, Achillea millefolium, Plantago media, etc.
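The classification and ordination steps reported above (a cluster analysis on floristic distances followed by a PCoA) can be illustrated with a short, generic sketch; the software actually used by the authors is not specified here, and the small abundance matrix below is hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical releve x species abundance matrix (rows: V1..V6).
abund = np.array([
    [30, 12,  0,  5,  8],
    [28, 10,  2,  6,  7],
    [20, 15,  5,  8,  4],
    [18, 14,  6,  9,  3],
    [10, 30, 12,  4,  1],
    [ 8, 32, 14,  3,  0],
], dtype=float)

# Bray-Curtis dissimilarities between the experimental variants.
d = pdist(abund, metric="braycurtis")

# UPGMA dendrogram; cutting it at a chosen level yields the vegetation
# groups discussed in the text (three groups in this example).
z = linkage(d, method="average")
groups = fcluster(z, t=3, criterion="maxclust")
print(groups)

# Classical PCoA: double-centre the squared distance matrix and take the
# leading eigenvectors as ordination axes.
d2 = squareform(d) ** 2
n = d2.shape[0]
j = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * j @ d2 @ j
evals, evecs = np.linalg.eigh(b)
order = np.argsort(evals)[::-1]
coords = evecs[:, order[:2]] * np.sqrt(np.maximum(evals[order[:2]], 0))
print(coords)  # first two ordination axes
```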
For the plant species listed above, the annual application of 10 t ha−1 of manure leads to an improvement in soil nutrients, so these plant species are no longer at their ecological optimum. Some of the plant species have their ecological optimum between the treatment with 10 t ha−1 manure and the absence of fertilization, such as Plantago lanceolata (p < 0.001), Onobrychis viciifolia (p < 0.001), and Bromus secalinus (p < 0.001). The application of manure (V2) led to the extinction of some plant species and the appearance of others. In particular, the annual application of 10 t ha−1 manure (V2) determined the disappearance from the floristic composition of the following species: Agropyron intermedium, Brachypodium pinnatum, Carex humilis, Carthamus lanatus, Bupleurum falcatum, Allium angulosum, Nigella arvensis, and Scabiosa ochroleuca. Moreover, it caused the appearance of nine new plant species in the floristic composition: Dactylis glomerata, L. perenne, Trifolium pratense, Trifolium repens, V. cracca, etc. With the application of 10 t ha−1 manure + N50P25K25 (V3), three plant species disappeared from the phytocenosis (Coronilla varia, Cerastium holosteoides, Poa angustifolia), and ten plant species appeared in the floristic composition, such as Festuca arundinacea, Festuca pratensis, Poa pratensis, B. secalinus, C. humilis, Medicago sativa, Salvia pratensis, and Centaurea stoebe. This analysis was performed in comparison to V1. The variant V4, with N50P25K25, led to the extinction of three plant species (B. pinnatum, Eryngium campestre, and C. varia) and the emergence of four new species (Arrhenatherum elatius, F. arundinacea, etc.). Treatment V5 caused the plant species Bromus inermis to appear and led to the extinction of the species O. viciifolia from the floristic composition of the grassland type. Regarding the last degree of intensification of the phytocenosis, the application of 10 t ha−1 manure + N100P50K50 (V6) determined the emergence of mesotrophic and eutrophic plant species, which find their ecological optimum at this degree of fertilization. With this treatment, the floristic composition is restricted: 10 plant species disappeared from the floristic structure, including P. lanceolata, Elymus repens, Fragaria viridis, Convolvulus arvensis, etc.

Effects of Fertilization on Grassland Biodiversity (Number of Species)

In Figure 3, axis 1 has the greatest importance in explaining the phenomenon, namely r = 0.859, tau = 0.480. The high number of plant species is related to the type of fertilizer applied, but especially to the dose administered. In our experiment, the biodiversity of the grassland suffered, as it decreased from 42 plant species in the control phytocenosis to 20 plant species in the phytocenosis where the most intensive management measures were applied, namely in V6 (Axis 1 = 0.895, Figure 3).

Figure 3. Number of species by experimental variant; the red line represents the regression (species number trends) and the blue line the maximum amplitude curves; the left graph represents axis 2 and the bottom graph axis 1.

In the control phytocenosis, we identified a total of 42 species. With the application of 10 t ha−1 manure, only small changes were registered at the level of the floristic composition.
In this phytocenosis (V2), we identified 39 plant species in the floristic composition; therefore, three plant species present in the control (V1) phytocenosis had disappeared. With treatment three (V3), we identified 36 plant species in the floristic composition. Compared to the control variant, an important change in the floristic composition was observed: six plant species had disappeared from the phytocenosis. The application of mineral fertilizers strongly influences the participation of plant species, causing the disappearance or appearance of species to a greater extent than the application of organic inputs. The application of N50P25K25 (V4) left 35 plant species in the floristic composition. A significant change in the floristic composition was registered with the application of N100P50K50 (V5); the reduction in the number of plant species was substantial, with only 25 species identified. With this treatment, a loss of biodiversity can be observed in the HNV grasslands of the study area. The increase in the fertilization quantities and the application of combined fertilization (V6) caused a drastic decrease in the number of plant species: only 20 species remained in the floristic composition. Compared to the control variant, a loss of biodiversity can be noticed for the HNV grasslands of the study area (Figure 3).

Species with Indicator Value for the Intensity of Applied Management

One of the objectives of this research was to identify plant species with indicator value for each graduation of fertilization applied, for the type of fertilizer applied, and for organic or mineral fertilization (Table 3). The indicator species listed in Table 3 provide valuable information on the management applied in these HNV systems. Once the state of the phytocenosis has been established, appropriate practical management strategies can be developed for the future, including measures for maintenance and use. The elaboration of this list of species with indicator value (Table 3) for the degree of intensity of organic, mineral, and combined (organo-mineral) fertilization is very beneficial, because in the near future the evaluation of grasslands will be carried out on this basis. The list of species with indicator value developed in this paper provides support for the beneficiaries of environmental and climate measures, for the self-assessment of practices on the farm, as well as for officials within the institutions involved in verifying compliance with commitments.

The Influence of the Organic Fertilizer Gradient on the Agronomic Spectrum

The dry matter biomass increases proportionally as the amounts of fertilizer applied increase. The amount of biomass correlates significantly (r = 0.698; Figure 4) with the applied treatments, and especially with those applied in variant 6. The productivity of the F. rupicola grassland (control) is 1.19 t ha−1 (DM), and after the application of the treatments it increases up to 2.05 t ha−1. In our experiment, the difference in biomass between the control variant and the treatment with 10 t ha−1 manure represented a significant increase in dry matter (0.25 t ha−1 DM). Increasing the amount of fertilizer produced higher yield increases, but at the same time decreased the biodiversity of the HNV grasslands.
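The indicator values reported in the section above (with 100 as the maximum) are conventionally computed, following Dufrêne and Legendre, as the product of a species' specificity and fidelity to a group; whether the authors used exactly this formulation is an assumption. A minimal sketch with hypothetical cover data:

```python
import numpy as np

def indval(abund, groups):
    """Dufrene-Legendre indicator value for one species.

    abund  : 1-D array of the species' abundance in each releve
    groups : 1-D array of the treatment/group label of each releve
    Returns {group: IndVal}, where IndVal = specificity * fidelity * 100.
    A value of 100 means the species occurs in every releve of one group
    and nowhere else.
    """
    abund = np.asarray(abund, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    mean_abund = {g: abund[groups == g].mean() for g in labels}
    total = sum(mean_abund.values())
    result = {}
    for g in labels:
        specificity = mean_abund[g] / total if total > 0 else 0.0
        fidelity = float(np.mean(abund[groups == g] > 0))  # share of presences
        result[g] = 100.0 * specificity * fidelity
    return result

# Hypothetical example: a species confined to the unfertilized control.
cover  = [12, 10, 8, 11, 0, 0, 0, 0]
labels = ["V1"] * 4 + ["V6"] * 4
print(indval(cover, labels))   # -> IndVal of 100 for V1, 0 for V6
```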
In sum, the application of organic fertilizers in a moderate dose of 10 t ha−1 manure brings a significant increase in biomass, but at the same time there is a reduction in the diversity of HNV grasslands. The application of mineral fertilizers in doses of N50P25K25, by contrast, does not produce an imbalance in the phytocenosis, registering an increase in biomass with only a minimal decrease in the number of plant species in the floristic composition (Figure 4).

Figure 4. Dry matter biomass by experimental variant; the red line represents the regression and the blue line the maximum amplitude curves; the left graph represents axis 2 and the bottom graph axis 1.

Discussion

The application of fertilizers on semi-natural HNV grasslands determines a clear classification of phytocenoses. The floristic composition of a grassland is a reflection of the phytocoenosis and the practical management applied [8,23]. Each phytocenosis, with its own characteristics, can be influenced by humans, and, therefore, new types of grasslands appear [2]. Organized experiments, both nationally and internationally, have shown that the intensification of grassland systems greatly reduces the specific richness, installing valuable forage species (generally nitrophilic species), which offer rich biomass yields and high-quality fodder [13,17,24-26]. The formation of specific groups as a result of the application of inputs demonstrates that fertilizer treatments have produced major changes in the phytocenosis of grasslands [27]. Moreover, in a study conducted in the period 2002-2003 on indicator species in various types of grasslands in the alpine area of Austria, it was specified that the applied management was what ensured the classification of floristic plots according to the similarity of the floristic composition of the grassland types analyzed. Although that study focused on the chemical properties of the soil and the substrate of the floristic cover, the author noted a clear relationship between the intensity of grassland management and the diversity of the floristic plots [28-30]. In a study carried out on permanent grasslands in the southern part of Tyrol (Italy), the authors observed positive effects of applying organic fertilizers in moderate doses on the species T. pratense in less dry years; however, the researchers expressed concern regarding the recurrence of drought and the complexity of applying fertilizers, noting that this complexity negatively affects the floristic composition and biodiversity of permanent grasslands [31]. In our research, the species T. pratense is an indicator for the treatment with 10 t ha−1 manure + N100P50K50 (V6). It has been demonstrated in the literature that the species A. capillaris increases its coverage in the vegetal cover as the fertilizer dose increases [32]. Thus, our results are also supported by specialized research: in a similar experiment, but on another grassland type, namely Festuca rubra, the species A.
capillaris was strongly influenced by the treatments applied and had the highest coverage after the treatments with N100P50K50 and N150P75K75, increasing its share from 12.5% coverage (control) to 62.5% of the floristic composition after treatment with N150P75K75 [2,4,18,33]. The richness of the number of plant species is determined by the type of fertilizer applied, but especially by the dose of fertilizer administered [34]. Numerous studies have shown a positive relationship between biodiversity and low-dose fertilization [7,35,36]. A study similar to ours found that the specific richness of a grassland comprised 38 plant species in the control variant, that the natural grasslands have a moderate floristic biodiversity, and that with the application of the treatments the specific richness is reduced as the dose of manure applied rises; these results are also confirmed by our studies [37]. Another study, conducted in 2007 in the context of long-term experiments in the Czech Republic, found that the application of low-dose manure significantly contributed to improving and maintaining the number of plant species in the vegetal cover [38]. This aspect is also confirmed by our results in the variant with 10 t ha−1 manure, where there was only a minor change in the floristic composition. At the same time, other authors confirmed that with the application of the treatment with N50P25K25 (V4 in the case of our study), the number of plant species decreased by only four species [39]. These results are similar to those of our research. Although the nitrogen dose of the treatment with 10 t ha−1 manure (V2) is approximately equivalent to that of N50P25K25 (V4), the changes in the floristic composition are different, a situation that is due to the stronger effect of mineral fertilizers, which cause more extensive changes and a higher degree of differentiation in the vegetation. The effect of manure on grassland depends on several factors. In some research, the external factors taken into account were the weather conditions, the characteristics of the manure, the type and moisture content of the soil, and the height of the grass [40], a topic that may be worth exploring in future research. In the case of our research, the number of plant species in this variant was reduced by seven. As the fertilizer doses increase, especially with the application of treatments with N100P50K50 and N150P75K75, plant biodiversity is drastically reduced [41,42]. In our research, the application of V5 produced a drastic decrease in the floristic composition of the HNV grasslands, the phytocenosis retaining only 25 plant species. The results of many specialized studies show that the specific richness of HNV grasslands is reduced as the applied fertilizer dose increases [43-45], and similar results are confirmed in our studies. At the same time, numerous studies have shown a positive relationship between biodiversity and low-dose fertilization [2,41,45,46]. Plant species with indicator value are those that offer valuable information to the researcher on the environmental conditions, the application of maintenance works, the means of use, the level of anthropogenic influence, etc. [17].
For example, indicator plant species may be particularly useful for HNV grasslands, for which a clear phytodiversity assessment and appropriate practical management must be established [46-48]. In our experiment, the application of treatments resulted in a clear differentiation of the phytocenoses, with a higher number of plant species with indicator value in the control phytocenosis. The highest indicator value (100) in our experiment was found for the following species: A. intermedium, A. elatius, F. arundinacea, B. inermis, B. secalinus, C. stoebe, C. lanatus, B. falcatum, A. angulosum, N. arvensis, and S. ochroleuca. However, these plant species of indicator value may be considered bioindicators for the control only at the participation levels registered in the control phytocenoses. Our results regarding the identification of plant species with indicator value are also confirmed by other specialized studies, such as [11,17,29,46,49,50]. Agronomic factors bring us additional information, useful in explaining the phenomena recorded in the vegetation cover. These factors are essential for establishing the agronomic value and for developing appropriate maintenance and use methods for the identified phytocenoses. In this research, we aimed to identify a balance between productivity and biodiversity; in other words, the appropriate dose of input applied so that the biodiversity of the grasslands does not register major changes while an important increase in fodder production is obtained for the semi-natural grasslands in our study area. In our experiment, as expected, the harvest is favored by the application of organic and mineral fertilization. Dry matter biomass increases as fertilizer doses increase. Significant biomass increases were also obtained by other researchers in this field [51-56]. Following this study, the use of fertilizers in moderate doses, namely the application of 10 t ha−1 manure or N50P25K25, will provide an increase in biomass and exert a minor influence on the diversity of grasslands, being close to the traditional use of grasslands in the Transylvanian Plain (our study area). At the same time, it can be seen in our work that the application of fertilizers (organic and mineral) can produce different biomass crops within the same grassland type; in our case, F. rupicola. This method of analysis provides us with valuable information regarding the evaluation of grassland phytocenoses. Such aspects of grassland ecosystems are also confirmed by other researchers in the field [2,57-59]. Study Site The Transylvanian Depression is famous for its extensive grasslands of various types, most of which have traditionally been used, until now, being mowed by hand or grazed extensively [60]. This natural heritage is now facing changes in the form of intensified use or abandonment of grasslands, both of which threaten the biodiversity of grasslands with HNV [25]. In Transylvania, there are extensive grasslands with HNV whose biodiversity is remarkable even on a global scale [1,61-63]. From the point of view of zoning and vegetation layers in Romania, the Transylvanian Plain is part of the nemoral area [66], with altitudes between 250 and 400 m, usually with clay, brown clay, alluvial, and gray soils. The grassland type representative of the area is F. rupicola, often found on mesoxerophilic biotopes [4]. The productivity of F.
rupicola grassland is low to medium, with a production of 3.5-6 t ha−1 green mass and a grazing capacity of 0.4-0.6 LU ha−1 [67]. Data on the meteorological situation were collected from the weather station located near our experimental field. This research presents meteorological data for 4 experimental years. The highest annual average temperature was recorded in 2019 (11.4 °C; Table 4), with the lowest average in 2021. It can be seen from the table below that the annual average temperature closest to the 60-year average occurred in 2021. Recently, we have been faced with climate change, which has major influences on grassland ecosystems [68-70]. The data recorded for precipitation are presented as the amount, the average for the last 60 years, and a characterization of the precipitation obtained. It can be seen in Table 4 that the highest rainfall was observed in 2019 and 2020, when 606.0 mm was recorded in each year; according to the climatic characterization, these were two rainy years. For the year 2018, the rainfall was 540.7 mm, a deviation of +9.7 mm from the annual average for the last 60 years. In 2021, the lowest rainfall was recorded (530.0 mm), a deviation of −1 mm from the 60-year average of 531 mm (Table 4). Table 5 shows the total rainfall and rainfall distribution (mm) for the four experimental years and the long-term total rainfall. Experimental Set-Up In order to achieve the objectives of this research, an experiment was initiated in 2018. Our experiment was carried out for 4 years (2018-2021). Fertilization of the experimental variants was performed in each experimental year (i.e., spring 2018, 2019, 2020, and 2021). Both mineral fertilizers and manure were applied annually. The experiment was devised according to the method of randomized blocks, in four repetitions (blocks), with 6 experimental variants (a layout sketch is given at the end of this subsection). The experimental plot area totaled 20 m² (Figure 6). The experimental variants were as follows: V1 - natural grassland (control); V2 - 10 t ha−1 manure; V3 - 10 t ha−1 manure + N50P25K25; V4 - N50P25K25; V5 - N100P50K50; V6 - 10 t ha−1 manure + N100P50K50 (Figure 6). The biomass was harvested from each experimental variant with a BCS 630 WS mower. Mowing was performed every year at a height of 4 cm above the ground. The experiments were performed on a haplic chernozem soil type. In 2017, before the start of the experiments, a description of the soil profile was created, and physical and chemical data were collected, as presented in Table 6. The analyses were performed by the Office of Pedological and Agrochemical Studies in Cluj-Napoca. Fertilizer Inputs Used When applying the inputs (organic and mineral), the weather conditions and time intervals recommended in the correct fertilizer application guide were taken into account. The application of the inputs was carried out annually, usually in the first week of April, this being considered the optimal application time. The manure, obtained from households in the area, was well fermented and conformed to the guidance on the correct application of fertilizers. The chemical composition of the manure was: nitrogen (N) 2058 mg/L, phosphorus (P) 515 mg/L, and potassium (K) 2058 mg/L. Mineral fertilization was performed with N (nitrogen), P (phosphorus), and K (potassium), in a ratio of 16:16:16. Floristic Studies Various grassland vegetation research methods are used in the study of grassland systems.
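The randomized-block layout described above can be illustrated with a short Python script. This is a sketch for illustration only: the seed value and the variant labels are assumptions made here for readability, not the authors' actual field plan.

import random

# Randomized complete block design: 4 blocks (repetitions), each containing
# all 6 experimental variants, shuffled independently within each block.
VARIANTS = ["V1 (control)", "V2 (manure)", "V3 (manure + N50P25K25)",
            "V4 (N50P25K25)", "V5 (N100P50K50)", "V6 (manure + N100P50K50)"]

def randomized_blocks(n_blocks=4, seed=2018):
    rng = random.Random(seed)      # fixed seed so the layout is reproducible
    layout = []
    for block in range(1, n_blocks + 1):
        order = VARIANTS[:]        # copy, so each block is shuffled afresh
        rng.shuffle(order)
        layout.append((block, order))
    return layout

for block, order in randomized_blocks():
    print(f"Block {block}: " + " | ".join(order))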
Floristic studies were conducted using phyto-population indices: presence/absence, abundance, density, coverage (dominance), abundance-dominance, and frequency [71,72]. Floristic studies were performed using the Braun-Blanquet abundance-dominance assessment scheme, using three sub-notes [2,66,73]. The floristic determinations were carried out every year when the Poaceae were in the flowering phenophase. In our research, we analyzed the floristic data from 2019, 2020, and 2021. The floristic studies were carried out annually. In the experimental area, a mixed management strategy was utilized (mowing and grazing). Statistical Methods Used PC-ORD software version 7 was used to process the floristic data obtained in the experimental field (www.pcord.com, accessed on 15 July 2022) [74]; see Table 7. For processing, the data obtained were entered in the form of two matrices: in the first matrix, the data on vegetation were introduced, and in the second, the experimental variants were codified. The grouping of the surveys and the numerical classification of the experimental data of the present research were carried out with cluster analysis, for which we chose the Euclidean (Pythagorean) distance index. Ordination of the floristic plots (Principal Coordinates Analysis, PCoA) is a data-exploration method, following which hypotheses can be made about the ecological or agronomic gradients responsible for the variation in the floristic composition of different phytocenoses [66,73]. The measurement of the floristic distance was performed with the Sorensen (Bray-Curtis) similarity index. The Sorensen distance, measured as percent dissimilarity (PD), is a proportion coefficient measured in city-block space. Sorensen's index is very similar to the Jaccard measure; it was first used by Czekanowski in 1913 and independently rediscovered by Sorensen (1948). This index is least affected by large differences in the specific richness, dominance, and total abundance of the species in the areas and samples analyzed [31]. The analysis of plant species with indicator value highlights which species are responsible for differentiating the groups. In our research, we performed the analysis of indicator plant species (Indicator Species Analysis, ISA) according to the method of Dufrêne and Legendre [32]. This method is based on the calculation of the average abundance-dominance (ADm) and constancy (K) of a plant species in all groups. The method combines information on the concentration of species abundance in a particular group and the faithfulness of occurrence of a species in that group, and produces indicator values for each species in each group. These are tested for statistical significance using a randomization technique. The method assumes that two or more a priori groups of sample units exist, and that species abundances have been recorded in each of the sample units. The product of these phyto-population indices, scaled to 100, gives the indicator value of the plant species. This indicator value (INDVAL) can range between 0 (no indicator value) and 100 (perfect indicator value) [2,11,27,32]. Vegetation traits were calculated as three spectra: naturality (number of species (Spp. no.), Shannon Index (Shannon)); ecologic (trophicity (N), soil reaction (R), humidity (U)); and agronomic (mowing (C), grazing (P), crushing (S), forage value (VF), yield (Y)). For a complete and unified analysis of the three spectra, we use the term agroecological spectrum.
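To make the Sorensen (Bray-Curtis) distance and the Dufrêne-Legendre indicator value concrete, the sketch below implements both for a small plots-by-species abundance matrix. It is a minimal Python illustration on made-up toy data; PC-ORD's exact options, and the randomization test of INDVAL significance mentioned above, are omitted.

import numpy as np

def sorensen_dissimilarity(x, y):
    # Bray-Curtis / Sorensen percent dissimilarity (PD) between two plots.
    shared = np.minimum(x, y).sum()
    return 100.0 * (1.0 - 2.0 * shared / (x.sum() + y.sum()))

def indval(abund, groups):
    # Dufrene-Legendre indicator values per (group, species), range 0..100.
    # abund: (n_plots, n_species) abundances; groups: group label per plot.
    labels = np.unique(groups)
    mean_ab = np.array([abund[groups == g].mean(axis=0) for g in labels])
    specificity = mean_ab / mean_ab.sum(axis=0)   # abundance concentration
    fidelity = np.array([(abund[groups == g] > 0).mean(axis=0)
                         for g in labels])        # faithfulness of occurrence
    return {g: 100.0 * specificity[i] * fidelity[i]
            for i, g in enumerate(labels)}

# Toy example: 4 plots, 3 species, two "treatment" groups.
abund = np.array([[5.0, 0.0, 1.0],
                  [4.0, 1.0, 0.0],
                  [0.0, 3.0, 2.0],
                  [1.0, 4.0, 2.0]])
groups = np.array(["control", "control", "NPK", "NPK"])
print(sorensen_dissimilarity(abund[0], abund[2]))
print(indval(abund, groups))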
Floristic data processing was performed with PC-ORD, version 7, which performs multivariate analysis of botanical data. Vegetation was quantitatively analyzed with the ANOVA test. Table 7. Reformulation of the Jaccard and Sorensen indices for presence-absence data and abundance data [74]. The first step is to redefine the traditional binary counts as follows: S1 = total number of species in sample 1; S2 = total number of species in sample 2; S12 = number of species present in both samples; a = S12; b = S1 − S12; c = S2 − S12. Based on a, b, and c, the presence-absence forms are Jaccard = a/(a + b + c) and Sorensen = 2a/(2a + b + c). Conclusions The application of inputs on the grasslands of F. rupicola determines changes in the composition, which result in a change in dominance and co-dominance between plant species and the installation of new types of grasslands. Each amount of fertilizer applied, organic or mineral, determines a particular floristic composition. At the same time, fertilization strongly influences the participation of species, causing the disappearance of some plant species and the appearance of new ones. A significant change in the floristic composition occurs when applying mineral fertilizers in moderate to large quantities (N100P50K50). The application of fertilizers in moderate doses of 10 t ha−1 manure or N50P25K25 does not bring about major changes in the floristic composition and does not endanger the biodiversity of grasslands with HNV, but at the same time, it produces an increase in biomass. The phytocenosis of the control had, in the floristic composition, 12 plant species with indicator value. Following the application of inputs (organic and mineral), when applying 10 t ha−1 manure we identified five plant species with indicator value. The treatment with 10 t ha−1 manure + N50P25K25 (T3) revealed nine plant species with indicator value, T5 with N100P50K50 revealed eight species with indicator value, and T6 (10 t ha−1 manure + N100P50K50) had, in the floristic composition, a total of seven plant species with indicator value.
2022-08-03T15:03:23.082Z
2022-07-29T00:00:00.000
{ "year": 2022, "sha1": "4d4068e20ea3992295e86f2d6732970517f1b4ef", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/11/15/1975/pdf?version=1659504379", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "057927c70e35de7b09a8dd92cc4ade39cecb5d7c", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
251965849
pes2o/s2orc
v3-fos-license
Development of Online Courseware for Low-Achieving Special Education Students Learning Communication Abstract: In 2020, the government announced the Movement Control Order due to the COVID-19 pandemic. The situation led to the closure of every school and premises to stop the number of cases from increasing. Every teacher conducted online classes, where most of the students also faced difficulties in following the sessions. This study aims to propose online courseware for low-achieving students based on the communication subject. The study focuses on the development of courseware for Form 1 secondary school users. Low-functioning students are students who are unable to master the skills of reading, writing, and counting at a minimal level. The methodology used in this research is the evolutionary prototype with the integration of the SAM model, an instructional design model. A quantitative study was made among teachers and parents. As a result, a courseware called BrightEdu was developed to help the children; the findings agree with the theoretical conclusions and research results of other authors. Introduction The Ministry of Education (MOE) has established a special education program to help students with special needs in their studies. The programs that are offered are the Special Recovery Program, Inclusive Special Education Program, Integrated Special Education Program (PPKI), and Special Education Service Centre. In Malaysia, a total of 93,951 students with special needs were recorded in 2020 [1]. In special education, there are 3 levels of functionality of special education students: high-functioning, moderate-functioning, and low-functioning. Current literature defines special education as individually planned, systematically implemented, and carefully evaluated instruction to help exceptional children achieve the greatest possible personal self-sufficiency and success in present and future environments [2]. In 2020, the government of Malaysia told all Malaysians to stay at home due to COVID-19, and all schools and almost all business premises had to be shut down. However, schooling needed to continue even though students could not go to school, so they studied online. Many students faced difficulties in studying online due to bad internet connections, and special education is one of the areas of education that is directly affected by these common learning constraints [3]. The content of this research is focused on the communication subject. Communication is one of the core subjects for low-functioning students. The focus of this subject is to emphasize the application of communication and interpersonal skills in daily life. The communication subject consists of three main components: Malay language, English, and mathematics. This research develops courseware to help teachers, low-functioning students, and parents in conducting and following online classes. The study by Norazmi [3] was conducted to examine the implementation of the teaching and facilitation process at home for students with special needs with hearing impairment. Based on the findings, materials or facilities were the main challenge of online learning classes for parents (100%), followed by family management (92%), skills (72%), and knowledge (60%) [3]. These findings show that parents have difficulties in guiding their children through online classes.
Children with special educational needs all have learning difficulties or disabilities that make it harder for them to learn or access education than most children of the same age [4]. The development of the courseware also sought a comfortable way for the students to study at home. An inclusive and learning-friendly climate and environment can stimulate students, increasing participation in learning activities as well as unearthing their latent potential [5]. When teachers use supportive co-learning, there is an improvement in students' social skills, such as being ready to socialize with group members and being more responsible and independent in managing life [6]. Every student has their own style of understanding what they are learning. For special education students, the way of learning must be different, and most of these students have to be taught individually. Literature Review 2.1. Concept and Terminology "Learning experience is important as it is part of the teaching and learning process; students at this age need to get more attention to increase their motivation and satisfaction in a classroom [7]." This part presents several concepts and terms in special education, reviews the current issues, and shows some similar prototypes used in the research. Special education is designed for each infant, toddler, preschooler, and elementary through high school student with disabilities, and for individuals up to the age of 21, to meet their unique learning needs [8]. Educators, parents, and students have different opinions and responses regarding the concept of special education. López et al. [9] stated that some may have positive personal experiences of benefiting from special educational services, witnessing growth in supported students. Low-functioning students are students who are unable to master the skills of reading, writing, and counting. In addition, some pupils are weak in terms of fine motor, gross motor, and cognitive skills and need an assistive medium at all times [10]. These constraints result in them not being able to master other job skills that have a complex level of performance. Abrahamson et al. [11] mentioned that some children inherit a genetic propensity toward lower cognitive abilities. Even so, they still go to school because they want to be dedicated human beings and want to have their own work [12]. E-learning is commonly found in primary and secondary schools to teach children language, mathematics, and typing skills, while many corporate institutes use it as a major means for the delivery of the courses they offer [13]. Kisanjara [14] mentioned that e-learning is defined as the application of computers with assistive software, both by students within the class and for private study. Challenges of Low-Functioning Students in Facing Online Classes As soon as the Movement Control Order started, schools conducted classes online and students started to face difficulties in following them. Some of the teaching and learning methods that are suitable for instructors to use during the MCO are video clips, pictures, student presentations, and sketches [1]. However, if mainstream students face difficulties accepting and implementing learning at home, even greater constraints are faced by special-needs students and their parents [3]. Haris & Khairuddin [6] explored the implementation of inclusive pedagogy and examined its impact on the social development of students with special needs and learning problems.
They found that there are problems when mainstream teachers do not master the skills of handling inclusive pedagogy well, so they used qualitative methods to study the case. In the research, some teachers shared their teaching strategies, such as storytelling and cooperative, active learning instead of spoon-feeding. Based on the research, the teachers said that these techniques and strategies make students learn actively in class. As a result, students gain self-confidence, become cooperative, get actively involved in class, share thoughts, and become more independent. Based on the data gained by Norazmi [3], the highest percentage of online learning challenges is due to a lack of materials and facilities, as well as family management. Therefore, parents also need sufficient knowledge to formulate effective teaching strategies with the help of relevant teaching aids. The work presented by Norazmi [3] provides guidance to parents and families of hearing-impaired MBK in preparing to support the continuous learning process. Since mainstream pupils also have difficulties following online learning at home, it is even more difficult for MBK students and their parents to follow online classes. To study this, a quantitative survey was conducted among families with hearing-impaired children in Johor, and the results guide parents and families of hearing-impaired MBK. Review of Similar Courseware Teachers need several tools for conducting online classes. There are many applications and software packages that can assist teachers in preparing their lessons and make it easy for the teacher to stay connected with students and parents. Educational systems around the world are under increasing pressure to use the new information and communication technologies to teach students the knowledge and skills they need in the 21st century [15]. Seesaw Seesaw is an application that helps teachers conduct online classes, upload assignments, and prepare class activities. The application assists teachers in arranging and managing the workflow of the lesson. Teachers, parents, and students can sign up for this application. Mostly, kindergarten teachers use the Seesaw application to make online classes more fun and easy to follow. Parents may be involved in helping their children from home. Nearpod Nearpod is an interactive lesson app that also allows teachers to prepare their lessons. Teachers can get started with what they already have by uploading favorite interactive resources such as PowerPoint files or slides. Real-time checks such as audio responses allow students to ask the teacher questions during a live class. These platforms help teachers to guide parents and students in conducting online classes. Other than that, teachers may also build their own simple platform for students by using Google Classroom or Google Sites to manage online lessons. Schoology Schoology is a learning management system that gives educators the tools to personalize learning for every learner and can be accessed by teachers, parents, and students. Schoology provides a platform for educators to arrange their lessons, provide assessments to students, and run quizzes. Teachers may monitor students' performance based on the activities that the students perform. Methodology This research proposes an evolutionary prototype for the development of the BrightEdu courseware. The proposed methodology is the combination of an evolutionary prototype and the SAM model.
The first phase (requirement gathering) identifies and describes the gathered information. The second phase covers the quick design of the courseware, where a storyboard is used. The third phase (build prototype) involves the development of the courseware based on designing, prototyping, and reviewing. Jaya et al. [16] explained that "a prototype is an initial version of a software system that is used to demonstrate concepts, try out design options, and find out more about problems and its possible solutions" (p. 8). The prototyping methodology is interactive and involves iteration in prototype development. There are two types of prototyping methodology: throwaway and evolutionary prototyping. This research uses an evolutionary prototype as the main methodology. An evolutionary prototype is a software development method where the developer or development team first constructs a prototype [17]; users give feedback at every phase until the prototype meets the requirements. Figure 4. Evolutionary Prototype The SAM model is built in three main phases: preparation, iterative design, and iterative development. The intent is to provide increased flexibility, with more agile development, responsiveness, and collaborative opportunities than offered by traditional ADDIE [18]. The Successive Approximation Model (SAM) is suggested as an alternative ISD model to ADDIE, with its agile and iterative approaches [19]. The SAM model is an iterative process that provides the opportunity to test, experiment, and revise the designs. The SAM model starts with the preparation phase, which consists of information gathering and the Savvy Start. Next is the iterative design phase, where the project design takes shape and additional design is added. In this phase, we start to design a prototype and review it from time to time. The process of designing, building prototypes, and reviewing keeps repeating until it meets the specification. Thirdly, in the iterative development phase, there will be a design proof, alpha, beta, and gold versions. The design proof is passed on to the alpha version, which is the first version of the development; this then evolves into a second version, named beta, after an evaluation of the alpha version. Lastly, after the beta version is revised and evaluated, it passes to the gold version, the final version of the development. So, this phase covers development, implementation, and evaluation. The goal of this research is to develop online courseware for low-achieving special education students, and the development had to be completed within 3 months or less. Choosing an evolutionary prototype alone would eventually lead to a successful development; moreover, prototypes are tangible realizations of research plans that can be easily referenced by stakeholders when trying to describe desired changes. The development in this research is for educational use, so the syllabus, learning materials, and subjects need to be implemented in it. Therefore, the SAM model is implemented together with the evolutionary prototype. To explain more about the combination, the figure below shows how the iterations work together. Figure 7 shows the main page of the BrightEdu courseware. This will be the first page that the user sees on opening the courseware.
On this page, the user will see the "TEACHER" button and the "STUDENT" button to clarify their identity before logging into the system. Usability Test In this research, usability testing was done by giving realistic tasks to participants who represent the real survey respondents. A usability test was conducted to measure how usable the courseware was for low-achieving special education students aged between 13 and 15 years old. Due to the COVID-19 pandemic, face-to-face meetings were strictly limited and most of the schools were shut down, so this survey was done by sending out a Google Form with a demo video attached, for respondents to see how the courseware works. Twenty respondents participated in the test. The results of the usability test were collected based on the list of tasks given to the respondents. The results were calculated for tasks divided into segments: functions and arrangement, user convenience, colors, fonts, ratings, and feedback. Only teachers and parents were involved in the survey, with parents assumed to be representatives of the students. About 75% of the respondents were teachers and 25% were parents, for a total of 100% adults. Table 1 shows the findings for the functions section. In this section, a few questions were asked on the functionality of the courseware; users were asked to rate the interface and give feedback on the interface and its sequences. After watching the demo video, about 55 percent of users liked the courseware's interface; however, 10 percent still did not really like it. The results also show that about 11 and 12 of the 20 users, respectively, understood the sequences of the interface. Table 2 consists of 2 questions asking users whether they are comfortable using the system and whether they would use it hands-on. Overall for this section, about 13 out of 20 users agreed that the system is easy to use and that information, such as what the courseware is about, is easy to find. Table 3 shows a guiding statement for the user to rate the suitability of the colors. According to the surveys, users mostly agreed on the use of color, the combination, and its suitability for the students. The colors used in this courseware are turquoise and its related tones. Table 4 shows that the font used in this courseware is suitable for every screen; however, about 2 out of 20 users did not really agree with the suitability of the font. Table 5 shows the findings for the navigation section, where users were asked to test whether the navigation of this courseware can take them to the screen that they want. In this section, it is shown that users are quite satisfied with the navigation in this courseware. Table 5. Usability Test: Interface Section - Navigation Table 6 shows the findings for the button section. Buttons are one of the main parts of a system; the button is the main way to get to the next page or for media to play. In this section, users were really satisfied with the buttons. About 65% of users strongly agreed that the buttons mentioned are working, meaning that the system works too. Table 6. Usability Test: Interface Section - Button Table 7. Usability Test: Interface Section - Use of Media Table 7 shows the findings for the use of media. In this section, users were asked to rate the use of media: whether it is appropriate, suitable, and proper for all ranges of ages. Table 7 shows that about 70% of users strongly agreed that the photos used in this courseware are suitable for all ages.
Besides, about 85% of users disagreed that the photos used are sensitive, meaning that most of the photos used are appropriate. For the rest of the results, users were mostly satisfied with the audio, songs, and video used in this courseware.

Result | Strongly Agree | Agree | Not Agree | Strongly Not Agree
When I clicked the button, I was brought to the right screen | 12 | 8 | 0 | 0
"Home" button brings me to the home page | 7 | 13 | 0 | 0
"Module" button brings me to the module page | 7 | 13 | 0 | 0
"Announcement" button brings me to the announcement page | 7 | 13 | 0 | 0

Table 8 shows the findings for the overall functions of the courseware. The overall result is satisfying, although some users were still not really satisfied with the overall system. Table 8 shows that about 85% of users (17 out of 20) agree that this system is easy to use, which makes it suitable for children and for parents to guide them. They also agreed that this courseware can be used by the students. About 14 out of 20 users also agree that this courseware makes their work easier. Lastly, most of the users were satisfied and comfortable with this courseware, which fulfills the requirements. Conclusion About 85% of users found the courseware really suitable for low-achieving special education students. It is important to know these students' needs in visuals and media. It is also shown that about 89% of users accept this research as courseware that can assist the students. The use of media is really important to attract the students' attention, especially in understanding the subject.
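The percentages reported in the usability tables can be reproduced from the raw 4-point Likert counts. The short Python sketch below uses the button-section counts shown above; the dictionary structure and names are illustrative assumptions, not part of the original study.

# Derive the reported percentages from raw Likert counts
# (Strongly Agree, Agree, Not Agree, Strongly Not Agree), 20 respondents each.
RESPONSES = {
    "Clicked button goes to the right screen": (12, 8, 0, 0),
    "'Home' button brings me to the home page": (7, 13, 0, 0),
    "'Module' button brings me to the module page": (7, 13, 0, 0),
    "'Announcement' button brings me to the announcement page": (7, 13, 0, 0),
}

for question, counts in RESPONSES.items():
    total = sum(counts)
    strongly = 100 * counts[0] / total              # e.g. 12/20 -> 60%
    agree_overall = 100 * (counts[0] + counts[1]) / total
    print(f"{question}: {strongly:.0f}% strongly agree, "
          f"{agree_overall:.0f}% agree overall")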
2022-09-01T15:05:42.260Z
2022-08-26T00:00:00.000
{ "year": 2022, "sha1": "f6b81e6c2bf527cf2580a7bd508dec66d2a8417f", "oa_license": "CCBYSA", "oa_url": "https://lamintang.org/journal/index.php/jetas/article/download/379/273", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "a8be4705d01275c4a3d3f51e0ad24d5a1e5ee6f5", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
17491503
pes2o/s2orc
v3-fos-license
Non-Standard Neutrino Interactions and Neutrino Oscillation Experiments In analyzing neutrino oscillation experiments it is often assumed that while new physics contributes to neutrino masses, neutrino interactions are given by the Standard Model. We develop a formalism to study new physics effects in neutrino interactions using oscillation experiments. We argue that the notion of branching ratio is not appropriate in this case. We show that a neutrino appearance experiment with sensitivity to oscillation probability $P_{ij}^{exp}$ can detect new physics in neutrino interactions if its strength $G_N$ satisfies $(G_N/G_F)^2 \sim P_{ij}^{exp}$. Using our formalism we show how current experiments on neutrino oscillation give bounds on the new interactions in various new physics scenarios. I. INTRODUCTION The goal of neutrino oscillation experiments is to probe those extensions of the Standard Model (SM) which predict non-vanishing neutrino masses. However, in the usual treatment of neutrino oscillation experiments, it is often assumed that neutrino interactions are described just by the SM [1]. While we know that this is a good approximation, physics beyond the SM often also induces new neutrino interactions. If New Physics (NP) contributes significantly to neutrino interactions, the conclusions that we draw from the experimental data can be affected. For example, even for massless neutrinos, the NP can allow a weak eigenstate muon neutrino to produce an electron in the detector, and in this case we may erroneously conclude that oscillations have occurred. To search for NP effects in massive fermion interactions (quarks and charged leptons), the experimentally measured branching ratios are used in a straightforward way. However, in the case of neutrinos, there are two important subtleties: • Neutrino masses are unknown, and their difference may be very small. In such a case, experiments cannot observe neutrinos as mass eigenstates, and the results are sensitive to the time evolution of the flavor eigenstates. • The neutrino flavor is identified by charged current interactions. Since NP may modify them, the identification of the neutrinos cannot be done in a model-independent way. The results of neutrino oscillation experiments are sensitive to the following three ingredients: the production process, the time evolution, and the detection process. It is impossible to separate the NP contributions to the neutrino production or detection process: a formalism that combines all three ingredients is necessary. II. FORMALISM For simplicity, and without loss of generality, we assume two neutrino flavors, CP conservation, and that the neutrinos are highly relativistic. First, we define the bases we use. Since the mass basis is well defined, we express all neutrino states as superpositions of mass eigenstates. We always work in the mass basis for the charged leptons. Then, the weak interactions define the weak basis, where the neutrino weak eigenstates are SU(2) partners of the charged leptons. We start with a specific example and generalize it later. We consider a muon neutrino beam produced by $\pi \to \mu\nu$ decay, and the subsequent detection of electron neutrinos through $\nu n \to e p$. The weak eigenstate $|\nu^W_\mu\rangle$ is given by a superposition of mass eigenstates $|\nu^m_\alpha\rangle$ as $|\nu^W_\mu\rangle = \sum_\alpha U^W_{\mu\alpha} |\nu^m_\alpha\rangle$, so that $|U^W_{\mu\alpha}|^2 \propto |\langle \nu^m_\alpha, \mu | H_W | \pi \rangle|^2$, where $H_W$ is the weak interaction Hamiltonian. In the presence of NP there might be extra carriers of the charged current interaction besides the W boson.
Therefore, the neutrino produced by $\pi \to \mu\nu$ may be different from $|\nu^W_\mu\rangle$. We define this neutrino as a source basis eigenstate $|\nu^s_\mu\rangle$, given by a different superposition of mass eigenstates, $|\nu^s_\mu\rangle = \sum_\alpha U^s_{\mu\alpha} |\nu^m_\alpha\rangle$, so that $|U^s_{\mu\alpha}|^2 \propto |\langle \nu^m_\alpha, \mu | H | \pi \rangle|^2$, with $H = H_W + H_{NP}$, where $H_{NP}$ is the NP interaction Hamiltonian. Similarly, we define the neutrino detected by $\nu n \to e p$ as a detector basis eigenstate $|\nu^d_e\rangle$, given by another superposition of mass eigenstates, so that $|U^d_{e\alpha}|^2 \propto |\langle \nu^m_\alpha, n | H | e, p \rangle|^2$. In general, for any neutrino oscillation experiment it is useful to use the following bases: • The mass basis, $\{|\nu^m_\alpha\rangle\}$, where the neutrino mass matrix is diagonal. • The weak basis, $\{|\nu^W_\alpha\rangle\}$, where the leptonic couplings of the W are diagonal. • The source basis, $\{|\nu^s_\alpha\rangle\}$, where the interaction of the production process is diagonal. • The detector basis, $\{|\nu^d_\alpha\rangle\}$, where the interaction of the detection process is diagonal. When neutrino interactions are fully described by the SM, the last three definitions coincide, and this basis is usually called the interaction or the flavor basis. However, the main lesson from the above discussion is that in the presence of NP these three bases can be different. The source and the detector bases are related to the mass basis through the unitary transformations $|\nu^s_\ell\rangle = \sum_\alpha U_{\ell\alpha} |\nu^m_\alpha\rangle$ and $|\nu^d_\ell\rangle = \sum_\alpha V_{\ell\alpha} |\nu^m_\alpha\rangle$, with $\ell = e, \mu, \tau$. The amplitude for finding a $\nu^d_n$ in the original $\nu^s_\ell$ beam at time $t$ is $A(\nu^s_\ell \to \nu^d_n; t) = \langle \nu^d_n | e^{-iHt} | \nu^s_\ell \rangle = \sum_\alpha V_{n\alpha} U_{\ell\alpha} e^{-iE_\alpha t}$, where in the last step we have used the orthogonality of the mass eigenstates. The probability of finding a $\nu^d_n$ in the original $\nu^s_\ell$ beam at time $t$ is $P_{\ell n}(t) = |A(\nu^s_\ell \to \nu^d_n; t)|^2$. For two neutrino flavors and with CP conservation we have $U = \begin{pmatrix} \cos\theta_{ms} & -\sin\theta_{ms} \\ \sin\theta_{ms} & \cos\theta_{ms} \end{pmatrix}$, $V = \begin{pmatrix} \cos\theta_{md} & -\sin\theta_{md} \\ \sin\theta_{md} & \cos\theta_{md} \end{pmatrix}$, with $|\theta_{ms}|, |\theta_{md}| \le \pi/4$. Define $\theta_{sd} \equiv \theta_{md} - \theta_{ms}$ and $x \equiv \Delta m^2 L/(4E)$. Using $E_\alpha - E_\beta \approx (m^2_\alpha - m^2_\beta)/2E$, we get our main result: $P_{e\mu}(x) = \sin^2\theta_{sd} + \sin 2\theta_{ms} \sin 2\theta_{md} \sin^2 x$. (9) A few points are in order: 1. When neutrino interactions are described by the SM, $\theta_{md} = \theta_{ms} = \theta$, and Eq. (9) reduces to the known result, $P_{e\mu}(x) = \sin^2 2\theta \sin^2 x$ [1]. 2. NP that affects the production and the detection processes in the same way cannot be detected in appearance experiments. In those cases the flavor eigenstates are the same for all processes, even if they may differ from the weak eigenstates. Then we have $\theta_{md} = \theta_{ms}$ and Eq. (9) again reduces to the standard form, so that we cannot distinguish this situation from the SM interaction case. 3. Experiments work with neutrino beams that are not necessarily monoenergetic. $P^{exp}_{e\mu}$ is defined to be the total appearance probability of a specific experiment, where the dependence on the energy spectrum of the beam and on the baseline length $L$ is included. If neutrinos are produced by several decays of different initial states, then $P^{exp}_{e\mu} = \sum_i a_i \bar P^{exp}_{e\mu}(i)$, where $a_i$ is the relative weight of the $i$'th decay mode in the neutrino beam, and $\bar P^{exp}_{e\mu}(i)$ is the appearance probability had only the $i$'th decay mode been responsible for the neutrino production. 4. In two limits, $\Delta m^2 \gg E/L$ ($x \to \infty$) and $\Delta m^2 = 0$ ($x = 0$), $P_{e\mu}$ is $x$-independent. When $x$ is large, $\sin^2 x$ averages to $1/2$; then Eq. (9) gives $P_{e\mu} = \sin^2\theta_{sd} + \frac{1}{2} \sin 2\theta_{ms} \sin 2\theta_{md}$. (11) For massless (or degenerate) neutrinos, $x = 0$ and the appearance probability becomes $P_{e\mu} = \sin^2\theta_{sd}$. (12) We learn that a signal can be seen in appearance experiments even for massless neutrinos. This is the case when $\theta_{sd} \ne 0$, namely, when the interaction in the production process is different from the interaction of the detection process. This signal is constant in distance, and does not have an oscillation pattern.
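Before drawing conclusions, Eq. (9) and its limiting cases can be checked numerically. The Python sketch below uses illustrative angle values only; they are not taken from, or fitted to, any experiment.

import numpy as np

def p_appearance(theta_ms, theta_md, x):
    # Two-flavor appearance probability of Eq. (9):
    # P(x) = sin^2(theta_sd) + sin(2 theta_ms) sin(2 theta_md) sin^2(x),
    # with theta_sd = theta_md - theta_ms and x = Delta m^2 L / (4E).
    theta_sd = theta_md - theta_ms
    return (np.sin(theta_sd) ** 2
            + np.sin(2 * theta_ms) * np.sin(2 * theta_md) * np.sin(x) ** 2)

theta_ms, theta_md = 0.10, 0.15      # small NP-induced mixing (illustrative)
x = np.linspace(0.0, 10.0, 5)
print(p_appearance(theta_ms, theta_md, x))

# x = 0 limit reproduces Eq. (12): P = sin^2(theta_sd).
assert np.isclose(p_appearance(theta_ms, theta_md, 0.0),
                  np.sin(theta_md - theta_ms) ** 2)
# SM limit (theta_ms = theta_md = theta) gives sin^2(2 theta) sin^2(x).
theta = 0.2
assert np.allclose(p_appearance(theta, theta, x),
                   np.sin(2 * theta) ** 2 * np.sin(x) ** 2)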
We conclude: a distance-independent signal is not enough to prove that neutrinos are massive. Only an oscillation pattern provides a proof. Experimentally, we know that neutrino interactions are dominantly those of the SM. Therefore, while $\theta_{ms}$ and $\theta_{md}$ may be large, their difference has to be small. It is therefore reasonable to work in the weak basis and treat the NP as a perturbation. In the two-generation case we define two small angles, $\theta_{Ws}$ and $\theta_{Wd}$, that parameterize the deviation from the weak basis, and a third angle (not necessarily small), $\theta_{Wm}$, that rotates from the weak to the mass basis. Using $\theta_{ab} + \theta_{bc} = \theta_{ac}$ and $\theta_{ab} = -\theta_{ba}$, we get the relations between these angles and those defined in (7): $\theta_{ms} = \theta_{Ws} - \theta_{Wm}$, $\theta_{md} = \theta_{Wd} - \theta_{Wm}$, and hence $\theta_{sd} = \theta_{Wd} - \theta_{Ws}$. To find the rotation angles we use the previously mentioned example, but the final result is general. We consider a muon neutrino beam produced by $\pi \to \mu\nu$ decay, while the subsequent electron neutrinos are detected through $\nu n \to e p$. For SM interactions the produced neutrinos are $\nu^W_\mu$, and the detected ones are $\nu^W_e$. We are interested in NP that gives effective couplings of the form $G^s_N\, \bar u d\, \bar\mu \nu^W_e$ and $G^d_N\, \bar u d\, \bar e \nu^W_\mu$. Then, the produced and detected neutrinos are superpositions of weak eigenstates, $\nu^s \propto \nu^W_\mu + (G^s_N/G_F)\, \nu^W_e$ and $\nu^d \propto \nu^W_e + (G^d_N/G_F)\, \nu^W_\mu$. It is useful to express the appearance probability in terms of the rotation angle from the weak to the mass basis, $\theta_{Wm}$, and the NP strengths, $G^s_N$ and $G^d_N$. For $x = 0$ we get from (12) $P_{e\mu} = \left[(G^d_N - G^s_N)/G_F\right]^2$. (14) Experimentally we know that in the $x \to \infty$ limit all the relevant angles are small. Then we get from (11) $P_{e\mu} \approx \left[(G^d_N - G^s_N)/G_F\right]^2 + 2\left(G^s_N/G_F - \theta_{Wm}\right)\left(G^d_N/G_F - \theta_{Wm}\right)$. (15) From the above formulae we can obtain an order-of-magnitude estimate of the strength of the experimentally relevant NP. An experiment with sensitivity to oscillation probability $P^{exp}_{e\mu}$ can probe NP in neutrino interactions when its effective strength satisfies $(G_N/G_F)^2 \sim P^{exp}_{e\mu}$. (16) III. BRANCHING RATIO Measurements of Branching Ratios (BR) are widely used in searching for NP effects. The meaning of a BR is unambiguous when discussing quarks and charged leptons, for which a BR measures the transition rate between mass eigenstates. The main advantage of using BR is that only the production process is relevant to the calculation, and one need not worry about how the decay products are detected. Therefore, measuring a BR has to be independent of the experimental setup and of the theoretical model under study. Since experiments cannot detect neutrinos as mass eigenstates, for neutrinos these requirements are not satisfied: the time evolution of the flavor eigenstates and the NP effects in the detection process cannot, in general, be separated from the analysis of the experimental results. Therefore, the extension of the BR notion to the neutrino case is problematic. We see three major disadvantages in using BR for neutrinos: 1. Using BR calculations we can probe only part of the parameter space. In the two-generation case, three parameters describe the results of the neutrino oscillation experiments (see Eq. (9)): $\Delta m^2$, $\theta_{ms}$ and $\theta_{md}$. The BR calculation is sensitive to only one parameter, the mixing angle that rotates from the source basis into the basis we are interested in. For example, the BR into a final mass eigenstate is given by $BR(\pi \to \mu\nu^m_1) = \sin^2\theta_{ms}$. 2. In order to compare BR calculations with experiments, the dependence of the experimental results on $\Delta m^2$ and $\theta_{md}$ has to be removed. However, this cannot be done in a model-independent way, since each kind of NP may contribute differently to neutrino masses and to the detection process. Therefore, experimental measurements of BR cannot be presented in a model-independent way. 3.
Finally, there is a problem of definition. Are the theoretical calculations and the experimental bounds on rare decays such as $BR(\pi \to \mu\nu_e)$ [2,3], $BR(K \to \mu\nu_e)$ [4] and $BR(\mu \to e\nu_e\nu_\mu)$ [5,6,3] related to the same quantities, and therefore directly comparable? Calculations were done for neutrinos in the weak basis [7] and in the mass basis [8,9]. Experimental results are presented as bounds on the relative appearance probability, $\bar P^{exp}_{e\mu}(i) = P^{exp}_{e\mu}/a_i$, where $a_i$ is the relative weight of the relevant decay mode in the neutrino beam. In the $x \to 0$ limit they correspond to bounds on the BR for neutrinos in the detector basis, e.g. $BR(\pi \to \mu\nu^d_e)$. In the $x \to \infty$ limit, all the relevant angles are small, and $\nu^m_1$ ($\nu^m_2$) couples mainly to the electron (muon). Then, to first order in the small angles, electrons are detected when $\nu^m_1$ is produced at the source, or when $\nu^m_2$ creates an electron at the detector. Therefore, the experimental results bound the sum of the rare BR in the production process and the ratio of the cross sections in the detection process, e.g. $BR^{exp}(\pi \to \mu\nu_e) = BR(\pi \to \mu\nu^m_1) + \sigma(\nu^m_2 n \to e p)/\sigma(\nu^m_1 n \to e p)$. We see that the calculations cannot be directly compared with the published experimental bounds. We conclude: the notion of BR can be used to probe only part of the parameter space. The BR cannot be measured in a model-independent way, and comparisons of experimental and theoretical results always have to be done very carefully, paying attention to check that the definitions are consistent. Therefore, the notion of BR is not appropriate when searching for NP in neutrino oscillation experiments, and the formalism that we have developed here is preferable. IV. EXAMPLES We now give three examples of NP scenarios with non-standard neutrino interactions that can be probed by current and near-future neutrino oscillation experiments. In each case we first briefly present the model and the new neutrino interaction, and then we discuss the actual experiment for probing the new interaction. We then find the neutrino-sector-independent experimental constraints on the strength of the new interaction, and show how results of neutrino oscillation experiments can put stronger bounds on it in part of the neutrino sector parameter space. For simplicity, we do not specify the NP responsible for neutrino masses. In the first example we consider the Minimal Supersymmetric SM (MSSM) without R-parity [10,11]. We consider a very simple case where the MSSM superpotential is extended with only one extra term, $\lambda_{123} \hat L_1 \hat L_2 \hat E_3$, where $\hat L_i$ ($\hat E_i$) are lepton doublet (singlet) supermultiplets. Via the exchange of the singlet charged scalar, $\tilde\tau_R = \tilde E^3_R$, such a term gives rise to the effective four-fermion interaction (17) (in the weak basis) [11], where $P_L = (1-\gamma_5)/2$ and $G_N \sim |\lambda_{123}|^2/m^2_{\tilde\tau_R}$. (Recall: in the SM, $\mathcal{L}_{SM} = \mathcal{L}(\nu_\mu \leftrightarrow \nu_e, G_N \to G_F)$.) Let us now consider the KARMEN experiment [3]. Muon anti-neutrinos are produced in muon decay, $\mu^+ \to e^+ \nu^s_e \bar\nu^s_\mu$, and electron anti-neutrinos are searched for through inverse beta decay, $\bar\nu^d_e p \to e^+ n$. Since $\tilde\tau_R$ couples only to leptons, the detector basis coincides with the weak basis, but the source basis is different. Muon decay is mediated by $W$ or $\tilde\tau_R$ exchange. In the weak basis, the $W$ diagram produces $\bar\nu^W_\mu$, while the $\tilde\tau_R$ diagram produces $\bar\nu^W_e$. The strongest neutrino-sector-independent bound on $G_N$ is obtained from tests of universality in $\mu$ and $\beta$ decays [12,11].
From the lower bound of [13] we obtain the constraint (18) on $G_N/G_F$. We would like to show how the recent 90% CL bound from KARMEN [3] can be used to set stronger bounds on $G_N$ in part of the neutrino sector parameter space. We study two limiting cases. For massless neutrinos, from Eq. (14) we get the bound (20), which is stronger than the bound (18). In the large $\Delta m^2$ limit, from Eq. (15) we get the combined bound (21). In the second example we consider the minimal Left-Right Symmetric (LRS) model [14]. In this model there is a Higgs triplet, $\Delta_L$, with the Yukawa couplings to leptons (in the weak basis) $\mathcal{L} = f_{ij} L^T_i C i\tau_2 \Delta_L P_L L_j$, where $L_i$ are the lepton doublets and $C$ is the charge conjugation matrix. $\Delta^+_L$ exchange leads to the effective four-fermion interaction in Eq. (17), but with $G_N \sim |f_{11} f_{22}|/m^2_{\Delta^+_L}$. We again consider the KARMEN experiment. Since $\Delta_L$ couples only to leptons, the detector basis coincides with the weak basis, but the source basis is different. Muon decay is mediated by $W$ or $\Delta^+_L$ exchange. In the weak basis, the $W$ diagram produces $\bar\nu^W_\mu$, while the $\Delta^+_L$ diagram produces $\bar\nu^W_e$. The strongest neutrino-sector-independent bound on $G_N$ is obtained from tests of universality in $\mu$ and $\beta$ decays and is given in Eq. (18). Therefore, the bounds (20) and (21) also hold for the effective interaction arising from $\Delta^+_L$ exchange in the minimal LRS model. In the third example we consider models with light leptoquarks (LQ) [15]. There are several types of LQ that can lead to a sufficiently large new neutrino interaction. We concentrate on the $(I_3)_Y = (0)_{1/3}$ scalar LQ, $S$, which couples to fermions (in the weak basis) as $\mathcal{L} = \lambda_{ij} \bar Q^c_j i\tau_2 P_L L_i S$, where $Q_i$ ($L_i$) are the quark (lepton) doublets. $S$ exchange leads to the effective four-fermion interaction (22) (in the weak basis), with $G_N \sim |\lambda_{21}\lambda_{31}|/m^2_{LQ}$. We assume $\lambda_{32} \ll \lambda_{31}$; therefore, LQ interactions involving strange quarks are negligible. Let us consider the CHORUS, NOMAD and E803 experiments [16]. Muon neutrinos are produced in pion and kaon decays, $\pi \to \mu\nu$ and $K \to \mu\nu$, and tau neutrinos are searched for through $\nu n \to \tau p$. The new interaction (22) contributes in the same way to pion decay and to the detection process. Therefore, the neutrino produced in pion decay is $\nu^d_\mu$. This illustrates the above-mentioned result: had the neutrino beam been produced only from pion decay, we could not probe LQ exchange in those experiments. However, the new interaction (22) does not contribute to kaon decay, and the neutrino produced in $K \to \mu\nu$ is a weak eigenstate muon neutrino, $\nu^W_\mu$. The strongest neutrino-sector-independent bound on $G_N$ is obtained from the bound on $BR(\tau \to \pi^0\mu)$ [7]. Using [13] $BR(\tau \to \pi^0\mu) < 4.4 \times 10^{-5}$ and $BR(\tau \to \pi^-\nu) \approx 0.117$, we get the bound (24). We would like to show how the expected sensitivity of CHORUS and NOMAD, $P^{exp}_{\mu\tau} \sim 10^{-4}$, and of E803, $P^{exp}_{\mu\tau} \sim 10^{-5}$ [16], can be used to probe LQ exchange in part of the neutrino sector parameter space. Since we concentrate on LQ that can be tested only by neutrinos from kaon decay, we have to use the relative appearance probability of neutrinos from kaon decay, where we expect $\bar P^{exp}_{\mu\tau}(K \to \mu\nu) = \text{few} \times P^{exp}_{\mu\tau}$. We learn that LQ exchange can be tested, probably at CHORUS and NOMAD, and certainly by E803. For example, assume that the bound $\bar P^{exp}_{\mu\tau}(K \to \mu\nu) \lesssim 5 \times 10^{-5}$ will be achieved by E803. For massless neutrinos, Eq. (14) then gives a bound which is stronger than the bound (24); in the large $\Delta m^2$ limit, Eq. (15) gives a further combined bound. V.
SUMMARY There are two important ways in which physics beyond the SM can affect the neutrino sector: it may give non-vanishing neutrino masses, and it may modify neutrino interactions. In this paper we showed how neutrino oscillation experiments probe both effects, and how they can be distinguished in some cases. A distance-independent signal can arise from both effects; however, an oscillation pattern can arise only when neutrinos are massive. We studied the condition on the relative strength of the non-standard neutrino interactions in order for them to be testable (see Eq. (16)). Thus, current experiments aiming to reach a sensitivity of the order $P^{exp}_{ij} \approx 10^{-4}$ [16] can typically probe new neutrino interactions arising from physics at the 1 TeV scale. There are several well-motivated NP scenarios that introduce non-diagonal couplings that can be probed in this way: Higgs triplet exchange in Left-Right Symmetric models [14], light leptoquark exchange in various models [15], and superparticle exchange in supersymmetric models without R-parity [10,11]. Our results are of particular interest in light of the growing evidence for physics beyond the SM in the neutrino sector, in particular the solar neutrino problem [1,17], the atmospheric neutrino deficit [18] and the recent LSND result [19]. Those results cannot be simultaneously accounted for in a simple three-generation model, but only in models with more parameters. Usually, it is suggested that those results are hints of models with an extended neutrino sector [20]. Alternatively, it might be that they signal NP in neutrino interactions. Such NP can be significant for the MSW solution of the solar neutrino problem [12,21], for atmospheric neutrinos [22] and, as we have discussed, for accelerator experiments. A comprehensive analysis of all these experiments, including possible NP, is needed. ACKNOWLEDGMENTS I thank Enrico Nardi and Yossi Nir for many discussions and comments on the manuscript, and Shmuel Nussinov and Nathan Weiss for helpful conversations.
2014-10-01T00:00:00.000Z
1995-07-19T00:00:00.000
{ "year": 1995, "sha1": "25197d7c8f21a84a89d669fad49103903eab949c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/9507344", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c9bb4ae1cae4183adc539bff9a1e64e1f0c81fe5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250156285
pes2o/s2orc
v3-fos-license
Fine-Humor Producing Materia Medica in Persian Medicine According to Persian Medicine (PM), humors that can replace the consumed body compounds, while contributing to health maintenance, are called 'fine humors' (khelt-e saleh). However, only a limited number of foods and beverages have been mentioned as producers of fine humor. These substances are particularly important in maintaining health in vulnerable populations, including pregnant women, lactating mothers, the elderly, infants and children. They also play an important role in certain treatment plans during illness and injury and after recovery. The present study was designed to investigate the properties of fine-humor producing materia medica, as described by PM resources. Based on the search performed in PM textbooks, 63 substances were found to have this property. The most frequent Mizaj types were hot-wet (33.34%), hot-dry (19.05%), and cold-wet (17.47%). The highest organ tropism belonged to the kidneys and bladder, brain, liver, sex organs, stomach and lungs, respectively. Examining drug actions indicated obesogenous (53.97%), enhancing sperm production and sexual potency (42.86%), laxative (39.69%), and tonic (33.34%) actions to be the most prevalent effects of these substances in the body. By integrating these substances into diets, health promotion for children, the elderly, and mothers during nursing and pregnancy may be achieved. Additionally, patients can benefit from fine-humor producing nutrition both for 1) prevention of chronic diseases and 2) during disease recovery, acute phases of illness, anemia, and metabolic illnesses. Further studies are recommended to analyze the components and nutritional value, and to use PM capability in culinary medicine. Introduction Persian Medicine (PM) pays special attention to maintaining human health and preventing diseases. Persian scholars considered six principles in maintaining good health and promoting well-being: air, food and drink, movement and stillness, emotional and mental states, sleep and wakefulness, and evacuation and retention [1]. PM takes a holistic and comprehensive approach to the concept of health.
This indicates that an individual's inherent qualities and lifestyle, including diet, have a strong influence on preserving health and preventing diseases [2]. The emphasis of PM on proper nutrition goes to the extent that Rhazes, a well-known Persian philosopher and physician of the third century AH (ninth century CE), dedicated special care to the priority of food therapy in the treatment of patients, and advised it to all physicians in his illustrious quote, "Do not use medicine until you can treat patients with food" [3]. Based on the perspective of PM, all body organs are made up of four kinds of substances known as humors: phlegm (balgham), blood (dam), yellow bile (safra), and black bile (sauda). The quantity and quality of these humors must be balanced in order for the human body to be healthy. In his book, Human Nature, Hippocrates states, "The human body is made of blood, phlegm, yellow bile, and black bile, and these are the four natures of the human body." These four humors are causes of health and disease. As a result, the ideal condition in a human being results from having balanced humors. The ratio of humors in the body is not constant, and varies depending on the food consumed [4]. Disease develops when an imbalance develops in one or more of these humors [5]. However, a notable point mentioned in PM resources is that illnesses can be a result of a disturbance in either the quality (abnormal humors) or quantity (excess or insufficiency) of humors [1,5]. According to Avicenna's Canon of Medicine, humors that can replace the consumed body compounds, while contributing to health maintenance, are called fine humors. They are referred to by various names, including Khelt-e Saleh, Khelt-e Mahmoud, and Khelt-e Jayyed [6]. "Saleh al-Kimous" substances (as the material cause of fine humors) produce blood with humors in appropriate proportions. For a fine humor to be produced, the type of food substance used by humans constitutes the most important factor [7]. The main functions of fine humors include disease prevention and general health maintenance, especially for vulnerable groups (pregnant and lactating women, children, etc.). Another important clinical application is in treatment and rehabilitation plans for various acute and chronic diseases [8,9]. The names and functions of substances that are sources of fine humors are scattered in PM resources [10,11,12,13,14,15,16]. This article intends to systematically identify the single and compound medicinal substances and foodstuffs that create fine humors according to reference textbooks of PM, and to introduce them for use in the diet of children, pregnant and lactating women, and the elderly. Methods A systematic review approach was used in this study, with keywords including "Khelt-e Saleh". Our search also included Makhzan al-Advieh, an encyclopedia of materia medica comprising 1741 monographs. To define terminology, Bahr al-Jawahir and other dictionaries were also used. Furthermore, the terms "nutrition", "Persian Medicine", and "Iranian Traditional Medicine" were queried in the PubMed and Scopus databases to examine the perspective of the modern literature on the subject. There were no articles that particularly addressed the above topics. The frequency of Mizaj (temperament) and organ tropism, as well as the effect of each of these materia medica on body organs, were investigated subsequently.
Additionally, the Iranian traditional medicine General Ontology (IrGO) [17] was used to annotate drugs in the UnaProd database with IrGO entities and to create a document-term matrix (DTM). DTMs are matrices that present the frequency or absence/presence of terms occurring in a collection of documents. In our case, drugs made up the rows and the extracted terms constituted the columns of the matrix. The matrix was binary: zero indicated the absence of a term for a given drug, while 1 indicated its presence. An analysis that can be performed on a DTM is the co-occurrence analysis between terms. The co-occurrence of two terms is an indicator of semantic similarity and can be positive, negative or random. A co-occurrence analysis makes it possible to discover the relations between terms directly from the thematic content [18]. However, it should be pointed out that co-occurrence alone is not always indicative of relations between terms, making an assessment of its statistical significance necessary. The Dice index and the log-likelihood test are good methods for determining the relations between terms in text and binary data [19,20]. The latter was used to examine the results of this study; a minimal sketch of the computation is given below.

Results

Our search in the PM literature yielded 63 materia medica as producers of fine humor. The Mizaj, actions and organ tropism of each were retrieved. In the quality analysis of these substances, 61.91% had a hot quality, 53.97% had wetness, 30.16% were dry, and 28.58% were cold. In terms of Mizaj, 33.34% of the retrieved materia medica had a hot-wet Mizaj, 19.05% were hot-dry, 17.47% were cold-wet, 14.29% had a balanced Mizaj, 11.12% were cold-dry, and 3.18% were multipotent (having more than one Mizaj). These data are illustrated in Figure 1 and Table 2. In the analysis of the relationship between the substances that produce fine humor and the organs, the highest tropism belongs to the kidneys and bladder, brain, liver, sex organs, stomach and lungs, respectively (Table 3 and Figure 2; Table 3 lists the fine-humor producers in terms of actions). The following conclusions were reached after examining the drug actions and determining the mechanism of their activities according to PM: obesogenous (53.97%), enhancing sperm production and sexual potency (42.86%), laxative (39.69%), and tonic (33.34%) actions were the most prevalent effects of these substances in the body (Figure 3 and Table 4). The results of the co-occurrence analysis between fine-humor producing materia medica and organ tropism/action are illustrated in Figure 4. Regarding organ tropism, the most significant co-occurrences were seen with the kidneys, the liver and peri-renal fat, respectively. In terms of actions, fine-humor producing materia medica are most positively related to softening, obesogenous, and organ-generating properties.

Discussion

Based on PM references, 63 cases of materia medica were classified as producers of fine humor. The most common quality in these substances was hotness, in accordance with the overall predominance of the hot quality among the monographs of Makhzan al-Advieh [21]. Among the Mizaj of the retrieved producers of fine humor, the hot-wet Mizaj had the highest frequency and hot-dry ranked second. According to PM principles, the qualities of hotness and wetness are necessities and main factors of growth [1]. This accordance is observed in the producers of fine humor.
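To make the log-likelihood test on a binary DTM concrete, the sketch below shows one way to compute it from 2x2 contingency counts. This is only an illustration of the approach described in the Methods, not the authors' actual scripts; the toy matrix, the drug rows, and the term names are invented for demonstration.

```python
import numpy as np

def log_likelihood_g2(dtm, i, j):
    """Log-likelihood (G^2) statistic for the co-occurrence of terms i and j
    in a binary document-term matrix (rows = drugs, columns = terms)."""
    n = dtm.shape[0]
    k11 = np.sum((dtm[:, i] == 1) & (dtm[:, j] == 1))  # both terms present
    k10 = np.sum((dtm[:, i] == 1) & (dtm[:, j] == 0))  # only term i
    k01 = np.sum((dtm[:, i] == 0) & (dtm[:, j] == 1))  # only term j
    k00 = n - k11 - k10 - k01                          # neither term
    obs = np.array([[k11, k10], [k01, k00]], dtype=float)
    # expected counts under independence, from the row/column marginals
    exp = obs.sum(axis=1, keepdims=True) * obs.sum(axis=0, keepdims=True) / n
    mask = obs > 0  # 0 * log(0) is taken as 0
    return 2.0 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))

# Toy binary DTM: 5 hypothetical drugs x 3 hypothetical terms,
# e.g. terms = ["kidney tropism", "obesogenous", "tonic"]
dtm = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 0, 1],
                [1, 1, 0],
                [0, 0, 1]])
print(log_likelihood_g2(dtm, 0, 1))  # large G^2 -> non-random co-occurrence
```

A large G² value, judged against the chi-squared distribution with one degree of freedom, flags a term pair whose co-occurrence is unlikely to be random.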
The tendency of fine-humor producers toward chief organs such as the liver (source of the natural spirit) and the brain (source of the psychic spirit), and toward the liver-kidney pathway as a route for excreting the waste of humor production (the kidney serves the liver according to PM), demonstrates the role of the chief organs, and the interaction of their health, in the process of producing body humors and in healthy nutrition. The foods listed in Table 4 that are aphrodisiac owe this action to their tropism to the kidneys and sexual organs; they can be considered in the dietary plans of patients seeking pregnancy. Mosammen (obesogenous) is a PM term for substances that facilitate optimal completion of the four phases of digestion to produce fine humor, which in turn helps the body to grow and gain weight. However, our research found that not all substances with the Mosammen action necessarily produce a large quantity of humors (pomegranate, for example). It appears that these kinds of food stimulate the formation of fine humor, resulting in weight gain and growth, by empowering the natural spirit of the liver [11]. In PM resources, the aphrodisiac action is mainly observed in fine-humor producing foods rather than medicines; nonetheless, meals that enhance sexual power are mainly hot in quality. According to PM, "tonics" are substances that improve the body's general movement or powers at the cellular level, such as the digestive faculty (ghove-e-hazemeh), the absorptive faculty (ghove-e-jazebeh) and the mental faculty (ghove-e-mofakereh), or that protect tissues from porosity and the acceptance of waste products. Fine-humor producers strengthen the body through all three of these systems. Fine-humor producers are thus valuable substances that can be advised individually, or as parts of recipes and formulations, for health in proportion to temperament, for growth and development, for regulating and strengthening the performance of the chief organs (aza-e-raeiseh) and the humor-producing organs, and for general physical strengthening. By integrating these substances into diets, health promotion for children, the elderly, and mothers during nursing and pregnancy may be achieved. Additionally, patients can benefit from fine-humor producing nutrition both for 1) the prevention of chronic diseases and 2) during disease recovery, acute phases of illness, anemia, and metabolic illnesses. The proposal to study and analyze the components and nutritional value, and the use of PM capability in culinary medicine, will open new exploratory horizons for future studies.

Conflict of Interests

None.
2022-07-01T15:10:17.299Z
2022-06-29T00:00:00.000
{ "year": 2022, "sha1": "a84ab37dfaff53cadf6b41a0ae3cadc8c52ee172", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/tim.v7i2.9927", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8d37008382403b91eb29ed5885ff9268b7bf4d88", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
54932824
pes2o/s2orc
v3-fos-license
Risk factors associated with female sexual dysfunction among married women in Upper Egypt: a cross-sectional study

INTRODUCTION

Female sexual dysfunction (FSD) is a public health concern with many physical and psychological consequences that undermine women's quality of life. A woman is considered sexually functioning when she has the ability to achieve sexual desire, arousal, lubrication, orgasm and satisfaction. 1-4 Although FSD affects up to half of women and can lead to physical, psychological and social problems, many inconsistent findings have suggested overlapping physical, social and relationship factors as risks for FSD. 5-7 Besides, the religious and cultural background of women is thought to play a pivotal role in determining women's sexual functioning, which makes some risk factors for developing FSD in a certain population different from those in another population. 5-7 In conservative communities, such as that of Beni-Suef, speaking out about sexual problems is considered a taboo and people may be stigmatized for their sexual disorders, which may hinder forming an overall view of sexual problems, especially in females. 5 This could explain the scarcity of the available data on the state of FSD and its determinants among women residing in Upper Egypt. In this regard, the objective of this study was to detect the risk factors associated with FSD among premenopausal married women in Beni-Suef, Egypt.

METHODS

This cross-sectional study was conducted in Beni-Suef City in the period between May and July 2017. According to 2016 estimates, 2.9 million people live in Beni-Suef Governorate. Beni-Suef City is the capital of Beni-Suef Governorate, nearly 120 km south of Cairo, and is formed of an urban metropolitan area surrounded by rural villages along the highways linking Beni-Suef to three governorates: Cairo, Minia and Fayoum. Almost 60-70% of the population of Beni-Suef City live in urban areas. The sample size was calculated using Epi-Info version 7 StatCalc [Centers for Disease Control and Prevention (CDC), WHO], based on the following criteria: confidence level of 95%, margin of error of 5% and non-response rate of 30%. The eligibility criteria included women older than 18 years who had been married for at least one year before the interview date. Pregnant women, women with a history of menstrual disorders in the past year, women subjected to genital operations and women at menopause were not allowed to participate in the study. Before beginning the fieldwork, the urban metropolitan area of Beni-Suef City was classified according to the socioeconomic standards of its quarters into low, middle, and high socioeconomic strata. Out of each stratum (low, middle, and high), one quarter was selected randomly by card withdrawal, and from each quarter two streets were chosen using a random start.
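The sample-size criteria quoted above can be reproduced with the standard Cochran formula that Epi-Info-style calculators implement. The sketch below is illustrative only: it assumes an expected proportion p = 0.5 (maximum variability), a value the paper does not report.

```python
import math

def cochran_sample_size(z=1.96, margin=0.05, p=0.5, nonresponse=0.30):
    """n0 = z^2 * p * (1 - p) / d^2, inflated for the anticipated
    non-response so that the expected completed interviews reach n0.
    p = 0.5 is an assumption (maximum variability)."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # ~384 at 95% CI, 5% margin
    return math.ceil(n0 / (1 - nonresponse))

print(cochran_sample_size())  # ~549 invitations for ~385 completed interviews
```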
On the other hand, the rural villages surrounding the metropolitan area were stratified according to their geographic location (north, west, and south), and one village was selected randomly by card withdrawal from each location. The selected villages were then clustered roughly into two areas (east and west of the main water channel running through each village), and women living in these areas were invited to participate in the study. An Arabic-language interview questionnaire was designed for data collection. The questionnaire included two sections: section I included questions about the socio-demographic characteristics of the participants, their age at menarche, marriage duration, number of pregnancies and exposure to circumcision. Weight and height were measured using calibrated scales and the BMI was calculated. Section II evaluated the FSD of the participants using the Arabic version of the Female Sexual Function Index (ArFSFI). The ArFSFI is a 19-item questionnaire that evaluates FSD during the 4 weeks before the study only. It includes 6 domains: desire, arousal, lubrication, orgasm, satisfaction and pain. The questions in each domain have five to six choices, with a score ranging between zero and five. Women with lower scores are considered to have a higher probability of having FSD. 8,9 A total of 1000 women were invited to participate in this study; of them, 682 were interviewed, giving a response rate of 68.2%. Then 192 questionnaires were excluded (122 for not meeting the eligibility criteria and 70 because of incompleteness), leaving an analysis population of 490 women. The sensitivity of the issue and the conservative nature of the community could explain this relatively low response rate. After institutional approvals were obtained, the Faculty of Medicine, Beni-Suef University Research Ethics Committee approved the study protocol. The subjects were informed of the purpose of the study and its consequences, and the confidentiality of the data was confirmed. Data were analyzed using the Statistical Package for Social Science (SPSS Inc. Released 2009, PASW Statistics for Windows, version 22.0; SPSS Inc., Chicago, Illinois, USA). Frequency distributions as percentages and descriptive statistics in the form of means and standard deviations were calculated. One-way ANOVA and the independent t-test were used to compare sexual function scores. Correlation and regression analyses were performed when appropriate. P values of less than 0.05 were considered significant.

RESULTS

Of the studied predictors of FSD, age, marriage duration and number of pregnancies correlated negatively with all domains (except pain) of the ArFSFI and with its total score (p<0.05). Negative correlations were also noticed between BMI and each of the desire, arousal and lubrication domains (p<0.05). Unemployed women had lower scores for desire and arousal (p<0.05). No significant correlations were found between circumcision and any of the studied domains (p>0.05) (Table 2). Using multivariate analysis, all the predictors of FSD detected by univariate analysis were found to be potential risk factors (p<0.05) (Table 3).

DISCUSSION

FSD is a multidisciplinary disorder that affects the physical, mental and social status of women. In the current study, the associations between FSD and many socio-demographic and gynecological characteristics were investigated. Older women were more likely to have lower ArFSFI scores, and age correlated negatively with desire, arousal, lubrication, orgasm and satisfaction (p<0.05).
Previous national and international literature has confirmed this negative correlation. 5,10,11 Lower estrogen production and vulvovaginal thinning and dryness can explain the association between old age and FSD. 10,11 Besides, increased BMI in this study could be linked to higher probabilities of FSD (p<0.05, r=-0.197). Obesity can be a primary cause of FSD; however, it is also associated with metabolic syndrome, diabetes, and cardiovascular disorders, factors that lead to impaired sexual functioning. 12-14 Consistent with our findings, some previous studies suggested negative correlations between BMI and sexual functioning, whereas others did not. 5,15-17 FSD attributed to obesity can be explained by the hormonal imbalance caused by insulin resistance, the atherosclerosis of the vasculature supplying the genitalia, and the psychological incompetence caused by low self-esteem and lack of confidence due to an imperfect body image. 2,18 Our results showed that the number of pregnancies was associated with FSD, which agrees with previous reports. 19,20 Also, women who reported more years of marriage had more FSD. This may be due to the fact that women who have been married for longer periods are likely to be older and to have more children. In addition, more years of marriage carry extra burdensome tasks that interfere with sexual functioning. What is really surprising about our findings is that, although the negative impact of circumcision on the sexual life of women has been heavily studied and documented, we did not find any considerable differences between circumcised and uncircumcised women regarding any of the studied domains of the ArFSFI. 21,22 This may be attributed to the fact that most of the circumcised women in Egypt were exposed to type I female genital cutting, which involves partial clitoridectomy instead of complete excision of the clitoris. 22 In Egypt, Thabet concluded that type I female genital cutting did not affect women's sexual desire. 23 Desire, arousal, orgasm, satisfaction and pain scores did not show considerable differences between circumcised and uncircumcised women in a hospital-based study on Egyptian women or in a recent study of 150 overweight and obese women from Upper Egypt. 5,24 Though statistically insignificant, a lower educational level in our study was associated with FSD. Previous studies showed that women with lower levels of education were more likely to experience FSD. 3,25 We also demonstrated that unemployed females had more FSD. It is likely that women with lower education and those who have no job are subjected to stressful economic conditions that may interfere with sexual functioning. In conclusion, several risk factors for FSD were detected in our study. Further research should focus on the adaptive techniques used to mitigate the impact of FSD. Barriers preventing women with FSD from seeking treatment should also be investigated.
2019-03-17T13:11:47.245Z
2018-01-24T00:00:00.000
{ "year": 2018, "sha1": "bdcc82d9401c50ab8df9f4bcff604c23fe8a7253", "oa_license": null, "oa_url": "https://www.ijcmph.com/index.php/ijcmph/article/download/2432/1763", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f750aba025870b43cc50d11b14325392f5229011", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
269030273
pes2o/s2orc
v3-fos-license
Versatile Nanoscale Three-Terminal Memristive Switch Enabled by Gating

A three-terminal memristor with an ultrasmall footprint of only 0.07 μm² and critical dimensions of 70 nm × 10 nm × 6 nm is introduced. The device's key feature is the presence of a gate contact, which enables two operation modes: either tuning the set voltage or directly inducing a resistance change. In I–V mode, we demonstrate that by changing the gate voltage between ±1 V one can shift the set voltage by 69%. In pulsing mode, we show that a resistance change can be triggered by a gate pulse. Furthermore, we tested the device endurance under 1 kHz operation. In an experiment with 2.6 million voltage pulses, we found two distinct resistance states. The device response to a pseudorandom bit sequence displays an open eye diagram and a success ratio of 97%. Our results suggest that this device concept is a promising candidate for a variety of applications ranging from the Internet-of-Things to neuromorphic computing.

Structural Images

Process control chips were fabricated in parallel with the devices presented in the manuscript. They have the same layers and thicknesses as the main chip (from bottom to top: Si / 200 nm SiO2 / 3 nm Ti / 47 nm Pt / 4 nm SiN / 2 nm TiN / 1 nm Cr / 24 nm Ag / 17 nm Pt / 3 nm Cr). They served to validate processes such as the physical etching step without the risk of damaging the main chip. A cross section of such a process control chip is shown in Figure S1, displaying the layer stack of 50 nm bottom electrode (BE, Ti/Pt), 6 nm nitrides, and 44 nm top electrode (TE, Cr/Ag/Pt/Cr). On top, the remains of the resist, hydrogen silsesquioxane (HSQ), can be seen. The thicknesses were extracted from the SEM image using ImageJ. On the main chip, test structures were fabricated for non-invasive checks during the fabrication process. In Figure S2, such a test structure to measure the width of the bottom electrode and verify the contact pad resistance is shown. The planned width was 70 nm; the measurements, made directly with the SEM, show the actual fabricated devices to be slightly narrower.

Two-Terminal Set Time

We evaluate the set time of the two-terminal section of our devices by adding a series resistance of 10 kΩ and applying input voltage pulses of 4 V. Two read pulses of 0.25 V were used to measure the device state before and after the set pulse. An exemplary measurement is shown in Figure S3. Using the definition laid out by Lübben et al. [1], we evaluate the set time t_set as the time between the two rising edges at 50% height. Evaluating t_set for 3 devices, we find a mean value of t_set = 1.0 μs.

Retention

Pulsed retention measurements were carried out using the same bias voltage V_B = 1.5 V and series resistance R_S = 1 MΩ as in the pulsed measurements shown in Figure 4. A transimpedance amplifier (TIA) was used to amplify the current and translate it into a voltage. A read pulse of 0.1 V amplitude and 1 ms duration was followed by an intentionally long set pulse of 125 ms duration to mimic the longer operation used in the manuscript. We measured the state of the device using twenty read pulses (0.1 V, 1 ms) with 1 ms of wait time in between. As shown in Figure S4, we find that the device turns off within the window spanned by these twenty read pulses.

Full I-V Cyclic Voltammetry

The full data shown in Figures 1 and 2 of the manuscript are reproduced here. The current is plotted on a logarithmic scale. All cycles are plotted in a light color, while the median cycle is highlighted in a bold color.
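The 50%-edge definition of the set time can be made concrete with a short sketch. The traces below are synthetic stand-ins for the pulse waveforms in Figure S3; the authors' actual acquisition code is not reproduced here.

```python
import numpy as np

def rising_edge_time(t, v, level):
    """First time the signal crosses `level` upwards (linear interpolation)."""
    above = v >= level
    idx = np.argmax(~above[:-1] & above[1:])  # first low->high transition
    t0, t1, v0, v1 = t[idx], t[idx + 1], v[idx], v[idx + 1]
    return t0 + (level - v0) * (t1 - t0) / (v1 - v0)

# Synthetic traces: input set pulse v_in and device-node voltage v_out
t = np.linspace(0.0, 5e-6, 5001)               # 5 us window, 1 ns steps
v_in = np.where(t > 1.0e-6, 4.0, 0.0)          # 4 V set pulse starting at 1 us
v_out = np.where(t > 2.1e-6, 3.0, 0.0)         # device responds ~1.1 us later

# t_set = time between the two rising edges, each taken at 50% height
t_set = (rising_edge_time(t, v_out, 0.5 * 3.0)
         - rising_edge_time(t, v_in, 0.5 * 4.0))
print(f"t_set = {t_set * 1e6:.2f} us")         # -> 1.10 us for these traces
```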
Certain cycles exhibit currents above 10 nA for negative source-drain voltages V_SD. However, this observation alone does not imply nonvolatility, especially when considered alongside the short retention times depicted in Figure S4 of the Supporting Information section "Retention" and the data presented in the section "Reset Voltage Analysis". The spread in the observed onset of the device is in the typical range for Ag-based systems.

Reset Voltage Analysis

The reset voltage was extracted from the measurements shown in Figure 2 and Figure S5(b). It was evaluated as the voltage at which the device resistance R_DUT > 10·R_on, with R_on being the mean of the on-resistances at the set voltage and the next 9 data points. A histogram of the extracted reset voltages is shown in Figure S6 and the statistical data are presented in Table S1. The low reset voltages fit well with the observed low retention.

Continuous Gate Change Measurements

Using the I-V measurement setup of Figures 1 and 2, we apply a triangular voltage signal V_GD to the gate while keeping the source-drain voltage V_SD constant and the drain grounded, as shown in Figure S7(a) in purple and blue, respectively. We perform this sweep two times and find that the gate voltage can turn the device on and off in a controllable way. In Figure S7(b), the current response I_SD of the source-drain contact is plotted against time; a compliance current of I_cc = 100 nA was set. In Figure S7(c), the same data are plotted against the gate voltage, showing an onset for negative gate voltages. We observe that negative gate voltages can turn the device on.

Leakage Currents

Leakage currents between the gate and the source or drain electrodes have been measured. To this end, we applied the voltages used in our experiments for 2 s, thereby gathering seven data points with an integration time of 40 ms for both I_SG and I_DG. We repeated these measurements on three devices. The mean current over all data points is shown in Table S2.

Gate Influence on the Resistance State Retention

The gate's influence on the retention of the resistance state was investigated using the same setup as in Figures 4-6 of the manuscript. The bias voltage V_B was held constant while a rectangular voltage signal V_GD was applied to the gate. After several pulses, the gate was set to floating, leading to a disconnected V_GD, as illustrated in Figure S8(a), where the transition from a solid to a dashed purple line indicates this event. Throughout this process, V_B remains unchanged and is shown in blue. The measured source-drain voltage V_SD, shown in Figure S8(b), was used to calculate the device resistance R_DUT, depicted in Figure S8(c). We evaluate the retention time as the time between the end of the last gate pulse and the subsequent drop of R_DUT. The resistance drops by more than 50% within 100 μs when no gate signal is applied, as shown in Figure S8(d). To derive this, three resistance states are extracted from the data: the mean resistance R_VG+ when applying a positive gate bias, the resistance R_float with a floating gate, and the midpoint R_half of these two values. The gated retention time is defined as the time difference between the last point with the gate connected and the first point after the resistance has dropped below R_half. This time scale agrees with the RC time of the setup used, as discussed in the Methods section.

Figure S1: Cross-sectional scanning electron microscopy (SEM) image of a control chip fabricated in parallel with the device chip. It shows the layers (BE/nitrides/TE) with their corresponding thicknesses (50 nm/6 nm/44 nm). A layer of resist (HSQ) is left on top of the TE. One can see a steep etching of the layers. The thickness extractions were performed with ImageJ.

Figure S2: Top view of an SEM image showing the actual measured width of the 70 nm structured bottom Pt wires.

Figure S3: Pulsed set time measurements with a series resistance of 10 kΩ and an input voltage pulse amplitude of 4 V. (a) Read-set-read operation. In blue, the incoming voltage pulse is shown; the read pulses have a height of 0.25 V and the set pulse of 4 V. In green, the voltage between the series resistance and the device under test is shown. During the first read pulse the device is in the off-state; during the application of the set pulse the device turns on; the device is still in the on-state during the second read pulse. Between the flanks, only every 20th data point is plotted for legibility; at the flanks, all data points are shown. (b) A closer look at the set operation. In this example, the set time was evaluated to be 1.08 μs.
Figure S4: Pulsed retention measurements using the same bias voltage V_B = 1.5 V and series resistance R_S = 1 MΩ as in the pulsed measurements shown in Figure 4 of the manuscript. (a) The bias voltage supplied by the AWG. The read pulses have an amplitude of 0.1 V and a duration of 1 ms. After a first read pulse, a long set pulse of 125 ms and twenty read pulses with 1 ms of wait time in between follow. (b) The current after the device is amplified with a TIA and the output voltage is measured. (c) The end of the set pulse and the twenty read pulses shown in detail. The AWG voltage (blue) and the measured TIA voltage (green) show that the device turns off after about 37 ms.

Figure S5: Full I-V sweeps of the measurements shown in Figures 1 and 2 of the main text. All cycles are shown in a light color, while the median cycle is highlighted. (a) Two-terminal I-V sweeps with a floating gate. (b) Three-terminal I-V sweeps with gate voltages of -1 V and +1 V shown in blue and orange, respectively.

Figure S6: Reset voltage evaluation of the gate-biased measurements shown in Figure 2 and Figure S5(b). The reset voltage is extracted at the point where R_DUT > 10·R_on.

Figure S7: Continuous gate voltage sweep with the source-drain voltage kept constant. (a) In purple, the triangular gate voltage signal (V_GD swept between -2.5 V and +2.5 V); in blue, the constant source-drain voltage (V_SD = 0.4 V). (b) Overlay of the source-drain currents of two sweeps over time; a turn-on can be observed in both cycles at around 70 s. (c) I_SD plotted as a function of V_GD. We observe that negative gate voltages can turn the device on.

Figure S8: Resistance state retention when the gate is set to floating. (a) The bias voltage V_B, in blue, is held constant, whereas the gate voltage V_GD, in purple, is first pulsed and then disconnected; this is depicted by the transition from a solid to a dashed line.
(b) The source-drain voltage V_SD is measured and used to calculate the device's resistance state. (c) The calculated device resistance R_DUT. During the rectangular gate pulses, the device resistance switches between two states; once the gate contact is disconnected, the resistance relaxes to an intermediate state. (d) Zoom-in on the retention measurement. The time between the gate being disconnected and the resistance falling below R_half is extracted as the gated retention time, t_ret = 100 μs.

Table S1: Statistical evaluation of the extracted reset voltages.

Applied gate voltage    V_GD = +1 V    V_GD = -1 V
Mean                    0.1431 V       0.0755 V
Median                  0.1250 V       0.1250 V
Standard deviation      0.1250 V       0.0954 V

Table S2: Leakage current data. In the left column, the applied voltage between the corresponding contacts is shown. All currents are below or equal to 1.6 nA, which is considerably less than the source-drain currents in the gated measurements. Hence, leakage currents to the gate are significantly below the source-drain compliance current: I_SG, I_DG ≪ 100 nA ≤ I_SD,cc.
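The reset-voltage criterion described above (R_DUT > 10·R_on, with R_on averaged over the set point and the following 9 samples) can likewise be sketched in a few lines. The sweep data below are synthetic, not the measured devices.

```python
import numpy as np

def extract_reset_voltage(v, r_dut, set_idx):
    """Reset voltage per the criterion R_DUT > 10 * R_on, where R_on is the
    mean resistance at the set point and the following 9 samples."""
    r_on = np.mean(r_dut[set_idx:set_idx + 10])
    after = np.where(r_dut[set_idx:] > 10.0 * r_on)[0]
    return v[set_idx + after[0]] if after.size else None

# Toy trace: device is on (10 kOhm) from the set point, then resets to
# 1 MOhm partway through the return branch of the voltage sweep.
v = np.concatenate([np.linspace(0.5, 0.0, 50), np.linspace(0.0, 0.3, 30)])
r = np.concatenate([np.full(50, 1e4), np.full(20, 1e4), np.full(10, 1e6)])
print(extract_reset_voltage(v, r, set_idx=0))  # ~0.21 V in this toy trace
```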
2024-04-11T06:17:54.340Z
2024-04-09T00:00:00.000
{ "year": 2024, "sha1": "a11a08469c17edc2674420524c26a80346e55271", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsnano.3c11373", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4cc83476dd3be762789f2fc7bbacb421cb1598f4", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
151366315
pes2o/s2orc
v3-fos-license
English Language Teaching Materials and Cross-Cultural Understanding: Are There Bridges or Divides?

English Language Teaching (ELT) materials can contribute immensely to cross-cultural understanding in the emerging globalised and borderless world. This is because there are common denominators present when materials are used in the teaching of the language across borders. An attempt to teach a language, for instance, must also consider the ways or the contexts in which it is used. Thus language cannot be detached from culture. The textbook or coursebook has been "standard equipment" for teachers for decades, maybe centuries. However, the culture of the target language has hardly ever been associated with the learning of the language within textbooks. This paper discusses the importance of the essential elements of language, communication and culture in textbooks, and their contribution towards language competency and cross-cultural understanding. The writer also discusses the relevance and the importance of awareness of the connections between language and cross-cultural understanding amongst curriculum developers and materials developers.

Consider the critical incident cited in Paul Simon's book The Tongue-Tied American (1980). In that book Simon recounts the incident where a member of the Georgia school board approached Genelle Morain of the University of Georgia with the question: "Why should a student who will never leave Macon, Georgia, study a foreign language?" Her reply to that question was: "That's why he should study another language" (p. 76). In Malaysia likewise, some politicians (in the three decades after independence) even questioned the need for English as an important second language. Obviously these people believed that Malaysians would live as they would, depending on the first language (Bahasa Melayu) for their work needs. But all that changed with the burst of work in the computer-related industry. There was a huge need for people who could communicate in English effectively as the country became more dependent on the international community for business. Today young Malaysians not only have to deal with international businessmen in the country, they frequently make overseas trips for business purposes. The problem of finding enough Malaysians competent enough to do business in English and compete with their counterparts in developing countries led to a drastic change in the school curriculum. Today the teaching of Mathematics and Science in both primary and secondary schools is done in English, so as to provide immersion into English in the early years. There are many people who now increasingly believe that culture should be taught within the language curriculum. The new foreign language standards (Standards, 1996) emphasize the need to "integrate" it within the new language curriculum. Teaching culture is widely believed to promote greater cross-cultural understanding. The most important reason for most people as to why culture should be integrated within language curriculums, however, is that language and culture are inseparably intertwined.

WHAT CAN BE THE PROBLEMS IF CULTURE IS NOT INTEGRATED INTO LANGUAGE WITHIN TEACHING MATERIAL?

There are several problems that we can anticipate if culture is not integrated into teaching material. Some of the more serious ones include:

1. The inability of learners to fully assimilate meaning within contexts of language use
2. The inability of the material to promote "realism"
3. The inability of the material to bring about "immersion" into the new world, which will leave bias, stereotyping and prejudice behind

In the next section of this paper, the writer will illustrate with examples how each of the three problems comes about and the implications for materials within the learning-teaching situation. Suggestions will also be provided on how culture can be integrated into language teaching materials.

THE INABILITY OF LEARNERS TO FULLY ASSIMILATE MEANING WITHIN CONTEXTS OF LANGUAGE USE

Language teaching has, in most parts of the history of ELT, been nothing but a focus on exercises presenting language for practice in make-believe situations. But not many people realize the folly of excessive focus on analytic or studial as opposed to experiential learning. The weakness of excessive focus on analytical methods was exposed as early as 1904 by Jespersen in his text "How to Teach a Foreign Language", where he said that "we ought to learn a language through sensible communications" (p. 11). What Jespersen wanted was for teachers to move away from language practice on random lists of disconnected sentences to discourse which is connected to thoughts communicated. This 1904 exposure by Jespersen was too far ahead of its time, and the period of Practice, Practice, Practice went full steam ahead for seventy-five years, until Widdowson (1978) and Slager (1978) re-emphasized the need for "context" and "longer, more natural discourse" as a basis for language teaching. What Widdowson and Slager advocated was teaching which totally put a stop to, or paid minimal emphasis on, monotonous drills and endless repetitions. They revealed that our textbooks are filled with exercises which have students do drills on disconnected sentences. A negative aspect of these exercises is that they are unnatural and contrived; none of these utterances is ever heard in either the local or the native speaker situation. A typical example of a short dialogue practicing forms and functions associated with making polite requests within a textbook would have two people in limited roles (Dialogue A). Such an example is typical of dialogues in school textbooks, which basically achieve what they set out to do: confine dialogue practice to a two-person interaction in an office, have the players roll out utterances without any of the interferences that come with natural discourse, and hopefully let all this register in the heads of learners after sessions of practice. Most teachers are unaware that "textbook language" of this kind puts learners at a distinct disadvantage when they are faced with interaction with native speakers. In most situations, especially at the workplace, the language is dynamic. A close match to an office situation where natural language would be used would be one such as this:

Dialogue B
I'm going down for a cup of coffee.
Please John, one for me.
White or black?
White and two sugars please.
(interrupting) Aaahem... I heard that. I thought you said you were on a diet.
But that new coffee downstairs is so bitter without sugar.
OK, two sugars Janet.
Can I have a cup too? Black, and no sugar.
I have only two hands, Steven. Go get your own.

The difference between Dialogue A and Dialogue B is that B is longer and is closer to natural conversation, with interruptions and all the other peripheral aspects of natural discourse, which include things like hesitations. Dialogue B is also closer to the type of discourse that native speakers and near-native speakers engage in. If the objectives of a language curriculum are geared at getting learners to master the language so that they achieve at least near-native-speaker competencies, or even close to that, then the language as represented in Dialogue B should be more common in ELT materials. But is this possible with space constraints in ELT textbooks and coursebooks? Most probably not. But there are ways to overcome this problem of space constraints, and one way is not to treat the textbook as the only source of material for teaching. Experts in materials development now say that the core material for teaching (in most cases the textbook) should cater for exercises that focus on language forms, while peripheral material (like audio CDs, CD-ROMs and videos) should focus on authentic materials with open-ended interactive communication. In this way both analytic and experiential aspects of learning merge. Stern (1990: 99) explains that an analytic approach is one in which the language is the object of the study, and an experiential approach is one in which the language is learned in communication. Allen et al. (1990: 77) feel that these two types of teaching may be complementary and would "provide essential support for one another in the L2 classroom". An analytic focus in teaching decontextualizes linguistic features to allow for isolation of the forms for analysis and practice. The forms under study, however, should be recontextualized by means of experiential approaches. Recontextualization can be achieved if teachers provide activities using language which involves not only grammar but also the functional, organizational and sociolinguistic aspects of the target language. One way recontextualization can become a reality in classrooms would be by getting students to view scenes in videos and CD-ROMs where natural communication which incorporates the culture of the target language is taking place, after they have had analytical exposure to the forms of the language.

THE INABILITY OF THE MATERIAL TO PROVIDE "REALISM"

Textbooks are a cultural disaster in terms of realism. Most of the time, they not only neglect representation of the culture of the target language; they have in fact established themselves as a variety of language that is distinctively independent, one which can be regarded as "textbook culture". Some teachers regard textbooks as breaking the rules of natural language use, as they lack both situational and linguistic realism. When texts lack realism of this nature, they are detached not only from the culture of the target language but from the first language as well.
But that new coffee dcwnstairs is so bitter without sugar OK two sugars Janet Can I have a cup too.Black, and no sugar I have only two hands Steven.Go get your own.The difference between dialogue A and dialogue B is that B is longer and is closer to natural conversation with interuptions, and all the other pe- ripheral aspects of natural discourse which include things like hesitations.Dialogue B is also closer to the type of discourse that native speakers and near-native speakers engage in.If the objectives of a language curriculum are geared at getting learners to master the language so that they achieve at least near native speaker competencies or even close to that, then the lan- guage as represented in Dialogue B should be more common in ELT materials.But is this possible with space constraints in ELT textbooks and course- books?Most probably not.But there are ways to overcome this prcblem of space constraints and one why is to not treat the textbook as the only source of material for teaching.Experts in materials development now say that the core material for teaching (in most cases the textbook) should cater for exer- cises that focus on language forms while peripheral material (like audio cDs and CD-ROMs and videos) should focus on authentic materials with open- ended interactive communication.In this way both analytic and experiential aspects of learning merge.Stern (1990;99) explains that an analyic ap- proach is one in which the language is the object ofthe study, and an experi- ential approach is one in which the language is learned in communication.Allen et al. (1990:77) feel that these two types of teaching may be comple- mentary and would "provide essential support for one another in the L2 classroom".An analytic focus in teaching decontextualizes linguistic fea- tures to allow for isolation of the forms for analysis and practice.The forms under study however should be recontextualized by means of experiential approaches.Recontextualization can be achieved ifteachers provide activities using language which not only involves grammar but also the functional, organizational and sociolinguistic aspects of the target language.One way recontextualization can be a reality in classrooms would be by getting stu- dents to view scenes in videos and cD-RoMs where natural communication which incorporates the culture of the target language is taking place, after they have had analytical exposure to the forms ofthe language. THE INABILITY OF THE MATERIAL TO PROVIDE *REALISM' Textbooks are a cultural disaster in terms of realism.Most of the time, they not only neglect representation of the culture of the target language, they in fact have established themselves into a variety of language that is Mukundon, English Language Teaching Materials and Cross-Cultural Llnderstending 47 tlistinctively independent-one which can be regarded as "textbook culture".Some teachers regard textbooks as breaking rules ofnatural language use as thcy lack in both situational and linguistic realism.when texts lack in real- isnr of this nature, they are detached fiom not only the culture of the target language but the first language as well. 
Situational realism is achieved in materials if two main criteria are ful- lillcd; age and interest.This would mean that texts and tasks relate to the age rrrrd interests of the target leamers.Most often then not, the culture (from the pcrspective of the broader sense of the word) of target learners, while they vrrry across boundaries will have commonalities, especially if one looks at rlrc "common behavior related to developmental stages" and assoeiates that rvitlr the "culture of learners and learning".vy'e are well aware of what has lrccn written about the predictabie psychology of young adult leamers irom rcscarch done extensively in the past, but unfortunately very little of that lrlrrslates to realism in materials for ELT.Textbook writers blatantly ignore tlr."gpl1g1s" of young adult learners by constantly falsiffing culture.Texts .ulrlactivities rarely account for such behavior as teenage restlessness and rt'lrt:lliousness, the end result of which our textbooks lack in situational real- r':rrr.Some examples of this found in Malaysian secondary school textbooks ;rrc illustrated below in Dialogues C and D: Dialogues such as these are common in systems where the agenda for "moral indoctrination" is so strong that it encompasses the entire school cur- riculum.The sum effect of this approach however would be a lack of interest in the dialogues r,vhich, from the onset of the lesson would lead to low motivation levels thus raising resistance to material.If ever there was terminol- ogr created today, the one most apt io describe texts such as the ones above would be "pedagogical put-offs"!Excess concern with moral issues, have led material builders to create "mirages" of life.In the two scenes above, the young adult characters show very little sign that they are typical young aciults.In fact, they look like clones ofthe so far unattainable "perfect young adult"; what some circles within society want out of young adults.In typical situations involving young adults, the girls in Dialogue C would not have easily volunteered to do the cleaning job, and the young adult in Dialogue D would not be playing the "dutiful guide" to the father.Also, striking a con- versation on a school canteen is hardly ever done by fathers and teenaged daughtersl Another way in which realism becomes detached from teaching mate- rial is when the language of dialogues is made to look artificial.It is true that materials which are deficient in naturalness lack in "iinguistic realism".An example of a typical dialogue lacking in linguistic realism found in school textbooks is provided below, in Dialogue E: I called to find about our History homework.Are you doing it nnrrr? Yes, I am doing the homework now" There is a lot of work to do.I am not sure which exercise to do.Do we have to do Exercise 2? Yes.We have to do Exercise 2. Do we have to do Exercise 3? Yes.we have to do Exercise 3. 
Do we write the answers irr the textbook?Yes, teacher wants us to write the answers in the textbook.Do we write in pencil?Yes, we have to write the answers in pencil.Some teachers and material builders would consider dialogues such as the one above "necessary" for focused practice as the aim ofthese exercises would be to provide target structures with minimum obstruction from pe- ripheral or intruding structures that are normally associated with authentic or near-authentic dialogues.While practice such as this with "intensified focus" on target structures may provide practice, they may never lead to learning as narrow intensified practice only enables these structures to be retained in short-term memory.The biggest set back to classroom teaching that materi- irls such as these inflict on leamers is the "falseness" of language.Speakers o{'English, both native or non-native speakers do not "interrogate" their liiends over the telephone about homework as the dialogue above suggests.ln most cases young adults do not even bother asking each other about how tlrcy are, especially since they meet in class everyday.While the aim oisuch a dialogue would be to teach the affirmative, there are negative conse- quences that come about from using texts of this nature.Second language leamers are not exposed to "real" language and this may inhibit their devel- opment as proficient users of the language. THE INABILITY OF THE MATERIAL TO BRING ABOUT "IMMERSION' INTO THE NEW WORLD WHICH WILL LEAVE BIAS, STEREOTYPING AND PREJUDICE BEHIND Most people, especially teachers would assurne that ,,cultural irnmer- sion" takes care of itself when learners are taught a second or foreign lan- guage.This however has been considered myth after recent studies lHinkel, i996; Hymes, 1996) showed that Non-native speakers (hINS) in colleges anj universities in the United States and canada and other dnglish-sp-eaking countries "do not always follow the norms of politeness and appropriatenesi commonly accepted in their L2 communities despite having lived in those countries for several years" (Hinkel.200l).Textbooks rr"d in Second lan- guage (SL) and Foreign Language (FL) situations do not gradually expose leamers to sociocultural variables in language and as such mastery of lin- what makes a particular expression or speech act situationally appro- priate is not so much the linguistic form or the range of the speak".',tingristic repertoire, but the socioculturar variables, wtrictr u.. .urllyaddressed in explicit instruction.Partly tbr this reason, it is not un.o*.on to hear ESL learners say How is it going, what's up, or Later to peers, professors, and even university deans (Hinkel, 2001:44g). university professors in the united states for instance also constantly complain about "unprepared" NNS in tutorials when academic reading is as- signed to them.while Nalive Speakers (NS) master tlre reading (as th-ey are aware of task demands), NNS are unaware of the implicit natuie'of thetask which demands absolute mastery of the assigned reading.As a result of their lack of preparation, these NNS wiil give the professors iegative impressions of their academic skills and preparation. 
Learners' awareness of target language culture in most cases is also lacking and this is true even for advanced and proficient learners.Byram and Morgan (1994: 43) point out learners cannot iransform, or accommodate or evea effectively assimilate into other culture.They ,,cannot simply shake off their own culture and step into another".In the iase of Mainlani chinese, lvluh,indan, English Language Teaching Materials and Crass-Cultural Understanding 5l their strong tradition of copying from their teachers and texts (because they are "perfect") will be viewed as "plagiarism" from the perspective of Ameri- can culture.Many students from Mainland China are viewed as "cheats" by their peers and teachers because of their inability to step into the culture of the target language.The developers of their ELT material failed to see the importance of exposing them to this aspect of "learning culture" which is prevalent in NS learning environments. IMPLICATIONS TO MATERIALS DEVELOPMENT AND TEACH- ING There is no need for drastic curriculum revamps when attempting to introduce target ianguage culture into targei ianguage materials.All it needs is a little direction towards immersion into target language culture.Lafayette (1978, 1988) suggests 9 ways in which language and culture can be inte- grated. 1. Cultural lessons and activities need to be planned as carefully as language activities and integrated into lesson plans. 2. Present cultural topics in conjunction with related thematic units and closely related grammatical content whenever possible.Use cultural contexts for language-practice activities, including those that focus on particular grammatical forms, 3. Use a variety of techniques for teaching culture that involve speak- ing, listening, reading, and writing skills.Do not limit cultural instruction to lecture or anecdotal formats. 4. Make good use of textbook illustrations and photos.Use probing questions to help students describe and analyze the cultural significance oiphotos anci reaiia. 5. Use cultural information when teaching vocabulary.Teach students about the connotative meaning of new words, Group vocabulary into culture-related clusters. 6. Use small-group techniques, such as discussions, brainstorming, and role-plays, for cultural instruction. 7. Avoid a o'facts only" approach by including experiential and process learning wherever possible. 8, Use the target language whenever possible to teach cultural content.9. Test cultural understanding as careftilly as language is tested. The 9 ways suggested above clearly show that the teacher's initiative will go a long way into irnmersing learners into the culture of the target lan- guage.While ihere is very little that can be done to incorporate culture into textbooks, mainly because of the constraints of space and nationalistic de- mands, ieachers should bring into classrooms photos, pictures, audio tapes and video clips to emphasize language and culture in target language con- texts. 
CONCLUSION

The teaching of a language must be accompanied by the culture that surrounds it. The most unfortunate part of "target language culture cleansing" in ELT materials is that materials developers and teachers equate target language culture with the extreme and often immoral sides of life, and hence see it as incompatible with the culture of the first language (L1). This misconception must be corrected so that the positive aspects of culture in the target language, those aspects which aid communication and tolerance, will find their natural place in the learning of the target language, and learners will benefit from this.
2018-12-18T08:30:53.079Z
2015-09-03T00:00:00.000
{ "year": 2015, "sha1": "07d45aed562243a7bf6df5346be21f0814ad66c1", "oa_license": "CCBYSA", "oa_url": "http://journal.teflin.org/index.php/journal/article/download/239/225", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "07d45aed562243a7bf6df5346be21f0814ad66c1", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
255521598
pes2o/s2orc
v3-fos-license
Application of Response Surface Methodology for Optimization of the Biosorption Process from Copper-Containing Wastewater

Copper-containing wastewater is a significant problem in the water industry. In this work, the biosorption of copper ions on alginate beads has been considered as a promising solution. The effective diffusion coefficient De is the parameter describing the diffusion of copper ions in calcium alginate granules. Granules with a wide spectrum of alginate content, from several to several dozen percent (0.6-20%), were tested; the granules with an alginate content of 20% were produced by a new method. The conductometric method was used to determine De. The study determined the De values depending on the process parameters (temperature and pH of the copper solutions) and the alginate content in the granules. Response surface methodology (RSM) was used to analyze the obtained results. The conducted research proved that all the analyzed factors significantly affect the value of the diffusion coefficient (R² = 0.98). The optimum operating conditions for the biosorption of copper ions from CuCl2 salt on alginate beads obtained by RSM were as follows: 0.57% alginate content in the granules, temperature of 60.2 °C, and pH of 2. The maximum value of De was found to be 2.42·10⁻⁹ m²/s.

Introduction

Heavy metals can be easily absorbed by marine organisms and crop plants due to their solubility in water, and they accumulate in the human body [1]. When copper-containing wastewater is discharged into the environment beyond the self-purification capacity, the high toxicity and non-biodegradability of copper ions pose a serious threat to animal and human health. High requirements for the quality of wastewater containing toxic metal ions, produced in various industries, force intensive research into all available methods for their purification [2]. According to WHO recommendations, the maximum allowable concentration of Cu²⁺ in water is 2.0 mg/L [3]. In addition, recovering copper from wastewater also has some economic benefits [4,5]. We selected adsorption to treat copper-containing wastewater due to its advantages, such as low initial cost and process simplicity [6,7]. However, the instability of adsorbents and the difficulty of their separation still limit practical applications. This encourages the search for inexpensive, renewable, and environmentally friendly biosorbents [8]. Biosorption on materials of natural origin seems to provide the most promising results: in addition to being highly efficient, it enables elimination of the entire content of metal ions, even if they are present at very low concentrations in the liquid waste. Alginate derived from brown algae is a highly popular material for the biosorption of heavy metals due to its advantages, such as low cost and high affinity via gelation [9,10]. Abundant functional groups, such as carboxyl and hydroxyl groups, are found in sodium alginate and can crosslink with cations [11,12]. Sodium alginate reacts with divalent cations such as Ca(II), Ba(II) and Sr(II) to form insoluble hydrogels, which are crosslinked into a reticular structure called the "egg box"; the crosslinking pathway is the exchange between the sodium ions of α-L-guluronic acid and the divalent ions [13,14]. In addition, alginate beads can be easily recovered [15,16]. Consequently, calcium alginate is a promising biomaterial for the biosorption of heavy metals [4,17].
Biosorption of copper ions on alginate beads is influenced by various conditions such as temperature, pH of the copper solution, and the alginate content in the granules [18-20]. Optimization of the above-mentioned parameters is imperative in order to obtain high biosorption yields at lower costs. A classical practice for achieving this is single-variable optimization, i.e., varying one factor at a time while keeping the others at constant levels. This approach is not only tedious and time-consuming but can also lead to misinterpretation of results, especially because the interactions between different factors are overlooked. Hence, statistical experimental designs using response surface methodology (RSM) may be considered an efficient way to deal with the limitations of the conventional method [21-23]. The efficiency of heavy metal removal can be influenced by various factors, such as temperature, the concentration of heavy metals or adsorbents, and pH. pH is a very important parameter because it affects the chemical speciation of metals in solution, as well as the ionization of the chemically active sites on the adsorbent surface [24]. RSM is a combination of statistical and mathematical techniques useful for investigating the interactive effects between several factors at different levels [25]. The experimental workload can be reduced by using statistical design. Among RSM designs, the central composite design (CCD) is the most widely used approach for statistical process optimization [26-28]. This experimental methodology generates a mathematical model that estimates the connection between the variables and the response. So far, no one has studied the usefulness of this method in the biosorption of copper ions on alginate beads; the literature data are fragmentary and inconsistent [4,11,18,19,29]. Keeping this in mind, the present work was carried out to optimize different process parameters (temperature and pH of the copper solutions) and the alginate content in the granules for the efficient biosorption of copper on alginate beads using RSM.

Results and Discussion

The RSM designs applied in our investigation have been successfully applied in many recent biotechnological applications; however, to the best of our knowledge, no report has been published on the optimization of the biosorption of copper ions on alginate beads.

Surface Morphology of Calcium Alginate Beads

SEM was used to study the external morphology (size, shape, and surface) of the prepared beads. Randomly selected beads were studied. The images were taken at 60× and 20,000× magnification. The SEM accelerating voltage was set at 20 kV. The SEM photographs are depicted in Figure 1, which shows that the beads were almost spherical, with a rough outer surface. SEM analysis showed that the diameter of the beads was 26.3 mm. The photographs presented in Figure 1c,d show differences in the surface morphology of the prepared beads: the surface of the beads with 20.3% alginate content was rougher than that of the beads with 4.5% alginate content, i.e., beads with a lower alginate content had a smoother surface. SEM photographs of the blank beads (Figure 1d) compared with a copper-loaded bead (Figure 1e) also show a difference in surface morphology; Figure 1e indicates that the alginate matrix entrapped copper.

The optimization of the biosorption of copper ions on alginate beads was carried out to find the optimal values of the independent variables (temperature, pH, and alginate content in the granules) that would give the maximum effective diffusion coefficient [25,30,31]. The results obtained from the different experimental sets are presented in Table 1. On the basis of the CCD, the second-order response surface model was obtained as Equation (7).
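For reference, the second-order model fitted in a CCD has the standard form below, written here for the three coded factors of this study (x1 = temperature, x2 = pH, x3 = alginate content). This is the generic polynomial, not the fitted Equation (7), whose numerical coefficients are given in the original article:

$$
D_e = \beta_0 + \sum_{i=1}^{3}\beta_i x_i + \sum_{i=1}^{3}\beta_{ii} x_i^2 + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon
$$

Here the β₀, βᵢ, βᵢᵢ and βᵢⱼ are the intercept, linear, quadratic and interaction coefficients, respectively, and ε is the residual error.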
The optimization of the water biosorption ions on alginates beads was carried out to find the optimal values of independent variables (temperature, pH, and alginate content in the granules), which would give maximum effective diffusion coefficient. The results obtained from the different experimental sets are presented in Table 1. On the basis of the CCD, the second order response surface model was obtained from Equation (7). The optimization of the water biosorption ions on alginates beads was carried out to find the optimal values of independent variables (temperature, pH, and alginate content in the granules), which would give maximum effective diffusion coefficient. The results obtained from the different experimental sets are presented in Table 1. On the basis of the CCD, the second order response surface model was obtained from Equation (7). ANOVA Analysis and the Adequacy of the Mathematical Model The optimization of the water biosorption ions on alginates beads was carried out to find the optimal values of independent variables (temperature, pH, and alginate content in the granules), which would give a maximum effective diffusion coefficient [25,30,31]. The results obtained from the different experimental sets are presented in Table 1. On the basis of the CCD, the second order response surface model was obtained from Equation (7). The results of the model in the form of analysis of variance (ANOVA) are given in Table 1. All coefficients in the full quadratic model were analysed with a T-test. The larger the magnitude of T-value and smaller the p-value, the more significant is the corresponding coefficient. The coefficients β 1 , β 2 , and β 3 were found to be significant (p < 0.05). Therefore, it can be said that the linear terms of all independent variables were significant to the model at 5% level of significance. The final estimative response model equation in terms of effective diffusion coefficient was: Model validity was confirmed with the coefficient of determination R 2 values. R 2 should be 0 < R 2 < 1 and larger values denote better results. The regression equation obtained indicated an R 2 value of 0.98. This value ensured a satisfactory adjustment of the quadratic model to the experimental data and indicated that 98% of the variability in the response could be explained by the model. For better evaluation of the model we also used absolute average deviation (AAD) and root mean square error analysis (RMSE) [32]. AAD and RMSE values close to zero are indicative of high accuracy between observed and predicted values. The AAD is calculated by the following equation: where Y i,exp and Y i,cal are the experimental and calculated responses, respectively, and P is the number of experimental runs. AAD and RSME values were determined as 0.05 and 0.08, respectively. R 2 together with AAD and RSME results indicate that the model is sufficient for estimation of the average of effective diffusion coefficient with high accuracy for all experimental points. A satisfactory correlation between experimental and predictive values is also shown by the predicted versus actual plot ( Figure 2). The clustered points around the diagonal indicate a good fit of the model. The predicted optimization result by the model suggested that the maximum effective diffusion coefficient (De = 2.42·10 −9 m 2 /s) for biosorption of copper ions on alginates beads could be achieved when temperature, pH, and alginate content in the granules were set at 60.2 °C, 2, and 0.57%, respectively. 
Tests have been conducted [33] in which the maximum De was also obtained for the lowest alginate content in the granules.

Three-Dimensional Response Surface Plots

The interaction effects and optimum conditions of the independent variables optimized for the enhanced biosorption process are presented in the 3D response surface plots shown in Figure 3. The contour plots were organized based on the quadratic model. Two variables were analyzed at a time while keeping the other variables at fixed levels (center point). As is evident from the response surface plot shown in Figure 3a, increasing the temperature and decreasing the pH led to a gradual, approximately linear increase in the effective diffusion coefficient. The combined effect of temperature and alginate content in the granules is shown in Figure 3b. The effective diffusion coefficient increases at higher temperatures and lower alginate concentrations. The presented results are consistent with previously published studies [33]: it was confirmed that the De coefficient increases as the alginate content in the granules decreases. Figure 3c shows that when both the reaction medium pH and the alginate content in the granules decreased, the effective diffusion coefficient increased. Evaluation of the 3D response surface plots indicated the following ranges for the optimum biosorption conditions: temperature, 50-70 °C; alginate content in the granules, 2-6%; and pH of the reaction medium, 1.4-2.2. The presented research proved that all the analysed factors significantly affect the value of the diffusion coefficient. The new method of obtaining alginate beads made it possible to obtain beads with a high biosorbent content (up to 20% by weight). Due to their different structure, such beads are characterized by lower values of the effective diffusion coefficient (De) compared to beads with an alginate content of a few percent. Besides the alginate content of the beads, environmental factors such as temperature and pH also influence the De value.
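To illustrate how such 3D response surface plots are generated, the sketch below evaluates a quadratic model over a coded grid of two factors and renders the surface with matplotlib. The coefficients are hypothetical, since the fitted equation's coefficients are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical fitted quadratic surface in coded units (X1 = temperature,
# X2 = pH), with X3 (alginate content) held at the center point
b0, b1, b2, b11, b22, b12 = 1.5, 0.4, -0.3, -0.05, 0.02, -0.01

x1 = np.linspace(-1.682, 1.682, 60)
x2 = np.linspace(-1.682, 1.682, 60)
X1, X2 = np.meshgrid(x1, x2)
Y = b0 + b1*X1 + b2*X2 + b11*X1**2 + b22*X2**2 + b12*X1*X2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X1, X2, Y, cmap="viridis")
ax.set_xlabel("X1 (coded temperature)")
ax.set_ylabel("X2 (coded pH)")
ax.set_zlabel("De x 1e9 [m^2/s]")
plt.show()
```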
Chemicals

Sodium alginate was purchased from Sigma-Aldrich and calcium(II) chloride from Chempur. Sodium alginate was used to fabricate the beads and calcium chloride to crosslink them. CuCl2 was purchased from POCH S.A. Avantor Performance Materials Poland SA. To adjust the pH of the solution, 1 M hydrochloric acid (HCl) and 1 M sodium hydroxide (NaOH) were applied, also acquired from POCH S.A.

Preparation of Calcium Alginate Beads

The starting material for the preparation of the beads was a low-viscosity sodium alginate. Although this alginate had a low viscosity compared with other alginates, the viscosity of its aqueous solutions was so high that the maximum attainable calcium alginate concentration in the beads was only 4.5% wt. Sodium alginate powder was added to distilled water to obtain a viscous sodium alginate solution. This solution was added drop-wise, under gentle stirring, to a 0.05 M CaCl2 solution used as the cross-linking medium, and spherical biosorbent beads were formed immediately. During a 30-min gelation period, Ca(II) ions were bonded to the alginate and the spheres became compact. To attain an equilibrium between Ca(II) in solution and the ions adsorbed on the beads, the beads were placed in a 0.05 M CaCl2 solution for 24 h. Alginate beads were stored in a refrigerator in a solution containing 0.01 M KCl and 0.001 M CaCl2. Before use, the gel beads were rinsed with distilled water several times to remove free Ca(II). Producing granules containing a greater concentration of alginate was a problem: the methods known so far have made it possible to obtain gels with concentrations of up to 6.5% wt., the high viscosity of aqueous sodium alginate solutions being the obstacle. A new method of preparing granules containing a high concentration of biosorbent has been elaborated, based on using a sodium alginate suspension; it is described in detail in Ref. [29]. Bead production started with the preparation of the alginate suspension. Ethyl alcohol of 96% vol. concentration was used to prepare the suspension. First, ethyl alcohol was mixed with distilled water in the appropriate proportions. Then, a suitable amount of sodium alginate was added to the previously prepared mixture.
The obtained suspension, after careful stirring, was dripped into the crosslinking solution (0.18 M CaCl2). Mechanical extrusion was used. The formed beads were maintained for 24 h in the 0.18 M CaCl2 solution and then kept in a refrigerator in a 0.01 M KCl solution. Before use, the gel beads were rinsed with distilled water several times. By making use of this method, three types of alginate granules with different alginate contents (10.2, 16.0, and 19.9% wt.) were produced. The properties of all the obtained granules are listed in Table 2. Prior to carrying out the measurements of the effective diffusivity by the conductometric method, the calcium(II) ions in the beads were substituted by copper(II) ions. To do this, an appropriate quantity of the calcium alginate beads (the total volume of the sorbent beads was 1 mL, to satisfy the condition α ≥ 100) was placed in a 0.1 M CuCl2 solution and the mixture was stirred with a magnetic stirrer. After a 2-h saturation period, the beads were transferred to a fresh volume of the 0.1 M Cu(II) solution, where they were stirred for 24 h. After saturation with Cu(II), the beads shrank in volume by ca. 8%, most likely owing to the exchange of the calcium(II) ions for the copper(II) ones.

Conductometric Method

This method (called conductometric) and the setup for the determination of the effective diffusion coefficient De were described in detail by Kwiatkowska-Marks and Miłek [33]. The method was based on carrying out the measurements in a closed system. In a beaker with distilled water, a known quantity of a Cu(II)-saturated alginate was placed and the suspension was vigorously agitated to eliminate the resistance of external diffusion and to ensure ideal mixing in the system. Under these conditions, Cu(II) ions present in the pores of the sorbent diffuse into the distilled water, with the rate of the process being controlled by the effective diffusivity. The increasing Cu(II) concentration in the solution results in an increase in conductance, which is measured with a conductometer. Under the assumptions that the sorbate is uniformly distributed within the whole bead and that the beads are initially at equilibrium with the liquid phase, and by selection of appropriate experimental conditions (α ≥ 100, i.e., the volume of the sorbent is at least 100 times smaller than that of the distilled water), we can use the equation derived for the open system:

$$\frac{C_t}{C_\infty} = 1 - \frac{6}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\exp\left(-\frac{D_e n^2 \pi^2 t}{R^2}\right)$$

where C_t is the sorbate concentration in solution at time t, C_∞ is the sorbate's equilibrium concentration in the solution, D_e is the effective diffusion coefficient, and R is the sorbent bead radius. Because in the new conductometric procedure the determination of the effective diffusivity is based on the measurement of the conductance of a solution into which the sorbate diffuses (under the assumption of a linear relationship between conductance and concentration), transformation of the equation for transient diffusion leads to Equation (5):

$$\frac{P_t}{P_\infty} = 1 - \frac{6}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\exp\left(-\frac{D_e n^2 \pi^2 t}{R^2}\right)$$

where P_t is the conductivity of the solution after time t and P_∞ is the conductivity of the solution after time ∞. The apparatus for the determination of the effective diffusivity consists of a 120-cm³ beaker, a thermostated water jacket, a magnetic stirrer, a thermometer, and a conductometer with an electrode. The conductance of the solution was measured on a microcomputer-assisted CPC-551 (ELMETRON) conductometer.
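In practice, De is obtained by fitting Equation (5), with the series truncated after a sufficient number of terms, to the measured conductance ratio P_t/P_∞. A minimal sketch of such a fit is shown below; the bead radius and conductance readings are hypothetical, not measured values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 1.5e-3  # bead radius in m (hypothetical; actual radii would come from Table 2)

def crank_ratio(t, De, n_terms=50):
    # P_t / P_inf for desorption from a sphere into a well-stirred bath,
    # series of Equation (5) truncated at n_terms
    t = np.atleast_1d(t).astype(float)
    n = np.arange(1, n_terms + 1)
    expo = np.exp(-De * (n**2)[None, :] * np.pi**2 * t[:, None] / R**2)
    return 1.0 - (6.0 / np.pi**2) * np.sum(expo / (n**2)[None, :], axis=1)

# Hypothetical conductance readings P_t (mS) at times t (s)
t_data = np.array([60, 120, 300, 600, 1200, 1800, 3600], dtype=float)
P_t = np.array([0.12, 0.18, 0.29, 0.38, 0.46, 0.49, 0.52])
P_inf = 0.53  # plateau conductance after ~60 min

# Fit De; an initial guess on the order of 1e-9 m^2/s matches the paper
De_fit, _ = curve_fit(lambda t, De: crank_ratio(t, De), t_data, P_t / P_inf,
                      p0=[1e-9], bounds=(1e-12, 1e-7))
print(f"De = {De_fit[0]:.2e} m^2/s")
```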
Copper(II)-saturated beads of a volume smaller than 1 cm³ were placed in 100 cm³ of distilled water, simultaneously starting the magnetic stirrer and a stop-watch. The temperature was held at 25 ± 0.5 °C throughout. The conductance of the solution was measured at set time intervals until constant readings were reached on the conductometer (usually within 60 min).

Experimental Design

The research program was designed in such a manner that the necessary information could be obtained with the least possible number of experiments. A CCD was used to examine the interactions between the various variables after they were coded. The variables were coded according to the following equation:

$$X = \frac{2x - (x_{max} + x_{min})}{x_{max} - x_{min}}$$

where x is the natural variable, X is the coded variable, and x_max and x_min are the maximum and minimum values of the natural variable [30,31]. Each factor was examined at five different levels, coded as shown in Table 3. For the three independent variables, i.e., reaction medium temperature (X1), reaction medium pH (X2), and alginate content in the beads (X3), the CCD is composed of 20 experiments. The design includes eight factorial points (a single run for each combination of the −1 and +1 levels), six replications of the center point (all factors at level 0), and six star points, i.e., points having one factor at an axial distance of ±α from the center while the other two factors are at level 0. The axial distance α was chosen to be 1.682 to make this design rotatable. The experimental design used for the study is shown in Table 4. All the experiments were done in triplicate and the average of the effective diffusion coefficients De·10⁻⁹ [m²/s] obtained was taken as the dependent variable or response (Y). Fitting of the experimental data can be described using a second-order polynomial response surface model:

$$Y = \beta_0 + \sum_{i=1}^{3}\beta_i X_i + \sum_{i=1}^{3}\beta_{ii} X_i^2 + \sum_{i<j}\beta_{ij} X_i X_j + \varepsilon \qquad (7)$$

where Y is the predicted response (the average effective diffusion coefficient), and the terms β0, βi, βii, βij, and ε denote the offset term, the linear effect, the squared effect, the interaction effect, and the residual term, respectively. Xi and Xj represent the coded independent variables [30,31]. The predicted polynomial model was analyzed using the response surface regression procedure. The coefficients of Equation (7) were determined using STATISTICA software.

Determination of the Point of Zero Charge (pHpzc)

The point of zero charge (pHpzc) is defined as the pH of the solution at which the charge of the positive surfaces is equal to the charge of the negative surfaces, i.e., the surface charge of the adsorbent is zero [34]. The surface charge is negative at pH > pHpzc and positive at pH < pHpzc [35]. The zero-charge point (pHpzc) of the solid adsorbents was determined by placing 50 mL of 0.01 M NaCl solution (the background electrolyte) in several closed Erlenmeyer flasks. The initial pH (pHi) in each flask was adjusted to a range of 2 to 12 by adding HCl (0.1 M) or NaOH (0.1 M) solutions. The pH was measured with a pH meter. An aliquot of the sample (0.1 g) was then added to each flask. The flasks were shaken at 300 rpm for 24 h at room temperature. The final pH (pHf) was then measured. pHpzc is the point where pHf − pHi = 0. The pHpzc obtained for the alginate beads was 6.3, which is comparable to the literature value (6.2) [36].
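As an illustration of the design-and-fit workflow described in the Experimental Design section, the following sketch constructs the coded three-factor CCD (8 factorial, 6 axial, and 6 center points, α = 1.682) and fits the second-order model of Equation (7) by ordinary least squares. The response values are synthetic placeholders, and the paper itself used STATISTICA rather than Python.

```python
import numpy as np
from itertools import product

alpha = 1.682

# Coded CCD for 3 factors: 8 factorial + 6 star + 6 center points = 20 runs
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
star = np.array([[s * alpha if j == i else 0.0 for j in range(3)]
                 for i in range(3) for s in (-1, 1)])
center = np.zeros((6, 3))
X = np.vstack([factorial, star, center])  # columns: X1 (T), X2 (pH), X3 (alginate)

def design_matrix(X):
    # Columns for Eq. (7): 1, X1, X2, X3, X1^2, X2^2, X3^2, X1X2, X1X3, X2X3
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# Hypothetical responses De*1e9 [m^2/s]; real values would come from Table 4
rng = np.random.default_rng(0)
y = 1.5 + 0.4*X[:, 0] - 0.3*X[:, 1] - 0.5*X[:, 2] + rng.normal(0, 0.05, len(X))

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```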
Conclusions

The RSM method can be used to optimize the sorption of copper ions on calcium alginate granules. Based on the research, it can be concluded that increasing the temperature of the copper sorption process on the alginate biosorbent increases the value of the effective diffusion coefficient of copper ions in the granules. Since sorption of Cu(II) on calcium alginate granules proceeds better at acidic pH, it was decided that the tested pH range would be from 1.5 to 4. In the tested range, an increase in pH caused a decrease in the De value. Regardless of the process temperature, the highest De value was obtained for the lowest pH. Therefore, sorption of Cu(II) ions is recommended to be carried out at the highest possible temperature and the lowest pH; under these conditions the highest De can be obtained, making the process most efficient.

Author Contributions: Conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing-original draft preparation, writing-review and editing, visualization, supervision, project administration, funding acquisition, I.T. and S.K.-M. All authors have read and agreed to the published version of the manuscript.
Factors Associated with Mechanical Complications in Intertrochanteric Fracture Treated with Proximal Femoral Nail Antirotation

Purpose: Although proximal femoral nail antirotation (PFNA; Synthes, Switzerland) has demonstrated satisfactory results when used for the treatment of intertrochanteric fractures, mechanical complications may occur. To better quantify the risk of mechanical complications when proximal femoral nail antirotation is used to treat intertrochanteric fractures, this study aimed to: (1) characterize the frequency of mechanical complications and the extent of blade sliding and their correlation with reduction quality, and (2) identify factors associated with mechanical complications.

Materials and Methods: A review of medical records from 93 patients treated for intertrochanteric fractures with a minimum of 6 months of follow-up between February 2014 and February 2019 was conducted. Blade position was evaluated using the tip-apex distance (TAD) and the Cleveland index. The extent of blade sliding was evaluated using the adjusted Doppelt's method for intramedullary nailing. Individuals were classified as having or not having mechanical complications, and reduction quality and radiologic outcomes were compared between the two groups.

Results: Mechanical complications occurred in 12 of 94 hips (12.8%), with 11 out of 12 being from the intramedullary reduction group. There was no significant difference in TAD between groups; however, significant differences were noted in the Cleveland index, AO/OTA classification, reduction quality, and extent of blade sliding. The mean blade sliding distance was 1.17 mm (anatomical group), 3.28 mm (extramedullary group), and 6.11 mm (intramedullary group) (P<0.001). The data revealed that blade sliding was a factor associated with mechanical complications (odds ratio 1.25, 95% confidence interval 1.03–1.51).

Conclusion: The extent of blade sliding determined using the adjusted Doppelt's method was significantly associated with mechanical complications, suggesting that prevention of excessive sliding through proper intraoperative reduction is important to help achieve satisfactory treatment outcomes.

INTRODUCTION

The number of osteoporotic hip fractures increases in the elderly population as life expectancy increases. Additionally, the incidence of osteoporotic hip fractures has increased over the years 1,2). The 1-year mortality rate following hip fractures in previous studies ranges from 8.4% to 36% 3). The goal of treatment for hip fractures is to lower the mortality rate through early recovery of ambulatory function 4,5). Patients with intertrochanteric fractures tend to be older and have more severe osteoporosis compared with patients with femoral neck fractures 6,7). Although some studies report that arthroplasty is better for early rehabilitation, internal fixation using various devices is currently the treatment of choice for intertrochanteric fractures 8). Cephalomedullary nailing has several advantages (e.g., shorter operating time, biomechanical stability) for the treatment of intertrochanteric fractures 9,10). Among several types of cephalomedullary nails, proximal femoral nail antirotation (PFNA; Synthes, Solothurn, Switzerland) is characterized by an anti-rotation helical blade, which is more resistant to rotational deformity compared with lag screws 11,12).
Although the PFNA system has demonstrated satisfactory results, the rate of mechanical failure, including nonunion, cut-through or cut-out, excessive migration of the blade, peri-implant fracture, and implant breakage, ranges from 2.6% to 13% 13,14). For the prevention of mechanical complications, appropriate fracture reduction and blade position are essential 15,16). Anatomical reduction, or extramedullary reduction with medial cortical overlap known as the Wayne-County technique, is associated with greater biomechanical stability compared with intramedullary reduction 17). Intramedullary reduction in comminuted intertrochanteric fractures without posteromedial cortical support is prone to excessive sliding and varus malposition of the proximal fragment, leading to mechanical failures. We hypothesized that the extent of blade sliding differs according to reduction quality. In addition, we considered excessive blade migration to be a factor associated with mechanical complications. Therefore, the purpose of this study was: (1) to determine the proportion of mechanical complications according to reduction quality and (2) to identify factors associated with mechanical complications in patients with intertrochanteric fracture treated by PFNA.

MATERIALS AND METHODS

This study was approved by the Institutional Review Board (IRB) of Yeungnam University Medical Center, and informed consent was waived by the IRB (YUMC 2020-03-012). A retrospective study was conducted at a tertiary referral hospital. Medical records and radiographs of patients who were surgically treated with PFNA for intertrochanteric fractures between February 2014 and February 2019 were evaluated. During the study period, all patients with intertrochanteric fractures who visited our institution underwent osteosynthesis surgery. Only patients treated with implants using a helical blade were included in this study. The inclusion criteria were: (1) over 55 years of age with an osteoporotic intertrochanteric fracture due to low-energy trauma, such as a simple fall 18), and (2) a minimum of 6 months of post-surgical follow-up. Patients with high-energy trauma, reoperation, or subtrochanteric or atypical femoral fractures were excluded. During the study period, 491 patients (492 hips) underwent intramedullary nailing at our institution for the treatment of intertrochanteric fractures; all surgeries were conducted by a single surgeon. Among these patients, 62 (62 hips) were excluded because they were not treated with implants with a helical blade. Of the remaining 429 patients (430 hips) who underwent surgery using PFNA, only 93 patients (94 hips) had at least 6 months of post-surgical follow-up and were thus included in the final analysis (Fig. 1). Of the 93 patients included, 22 and 71 were male and female, respectively; the mean age at the time of surgery was 77.6±7.8 years (range, 55.0-95.0 years). The mean body mass index was 22.2±4.0 kg/m² (range, 13.6-32.4 kg/m²) and the mean duration from admission to operation was 1.11±1.5 days (range, 0-6 days). Operations were not delayed beyond 48 hours except in the rare case that patients were not able to undergo surgery because of poor general condition. The medical status of patients was evaluated according to the American Society of Anesthesiologists (ASA) classification and the Charlson comorbidity index (CCI). The median preoperative ASA classification and CCI were 2.5 (range, 2-4) and 4.5 (range, 2-8), respectively.
All fractures were classified according to the AO/OTA guidelines based on preoperative computed tomography scans; 31A2.2 was the most common classification (n=34 hips). The percentages of stable and unstable fractures were 40.4% and 59.6%, respectively, according to the AO classification (31A1=stable; 31A2=unstable). Using radiographs obtained immediately after surgery, two independent orthopedic surgeons evaluated the tip-apex distance (TAD) 16) and the blade position in the femoral head (using the Cleveland index) 19), and classified fracture reduction quality into one of three categories (anatomical, extramedullary, and intramedullary) as described by Ito et al. 15). Classification as intramedullary reduction required its observation in at least one of the anteroposterior and lateral radiographs. If the radiologic assessments were not in agreement, results were confirmed after discussion. The quality of reduction was anatomical in 23 hips, extramedullary in 25 hips, and intramedullary in 46 hips. Blade sliding was evaluated using Doppelt's method, adjusted suitably for intramedullary nailing, by comparing the initial postoperative radiograph with that at the last follow-up (Fig. 2) 20). To minimize measurement error, femur rotation was confirmed by comparing the size of the lesser trochanter immediately after surgery with that on the radiograph captured at final follow-up. Mechanical complications included non-union, cut-out or cut-through, excessive migration of the blade, peri-implant fracture, and implant breakage 14). Demographic data of patients according to reduction quality are summarized in Table 1. Statistical analyses were performed with univariate comparisons using the independent t-test or ANOVA for continuous variables and the chi-square test for categorical data. Then, multivariable logistic regression analyses were performed to identify potential factors associated with mechanical complications. Differences were considered significant if P-values were <0.05. All analyses were performed using IBM SPSS Statistics for Windows (ver. 20.0; IBM, Armonk, NY, USA).

RESULTS

During the follow-up period, mechanical complications occurred in 12 of 94 hips (12.8%). The mean TAD was 20.1±5.07 mm (range, 11-34 mm) across all patients, and TAD exceeded 25 mm in 22 hips (23.4%). There was no significant difference in TAD between those with and without mechanical complications. Blades were inserted into a safe zone (Cleveland zones 5, 6, 8, and 9) in 82 hips (87.2%). The proportions of AO/OTA classifications and Cleveland index positions were significantly different between those with and without mechanical complications (Table 2). Among the 12 hips with mechanical complications, bony union was achieved in 4 hips through a revision osteosynthesis operation within 6 months following the initial operation; treatment failure occurred in 8 hips (8.5%). Conversion to hip arthroplasty was performed in 5 patients due to cut-out or cut-through and osteonecrosis of the femoral head. An additional 3 patients refused to undergo further operations. All patients achieved bony union at a mean of 7.2 months postoperatively, except for those with treatment failure.
The mean distance of blade sliding was 1.17 mm, 3.28 mm, and 6.11 mm in the anatomical reduction, extramedullary reduction, and intramedullary reduction groups, respectively (P<0.001) (Fig. 3). There were no cases of mechanical complications in the anatomical reduction group. Although excessive blade sliding (>5 mm) occurred in 1 hip in the extramedullary reduction group, bony union was achieved after the blade was exchanged for a shorter one. The other 11 mechanical complications occurred in the intramedullary reduction group. When patients were classified into two groups (i.e., those with and those without mechanical complications), univariable analysis revealed significant differences between the groups in reduction quality, AO/OTA classification, and extent of blade sliding. In a multivariable logistic regression analysis including these three factors, only the extent of blade sliding remained associated with mechanical complications after adjustment (odds ratio 1.25, 95% confidence interval 1.03-1.51) (Fig. 4).

DISCUSSION

Internal fixation remains the treatment of choice for intertrochanteric fractures, although some arthroplasty studies have demonstrated satisfactory results (e.g., mortality, risk of complications) 21). Reduction quality is important during internal fixation; intramedullary reduction in comminuted intertrochanteric fractures without posteromedial cortical support may lead to varus malposition of the proximal fragment. Excessive blade sliding, defined as sliding of more than 5 mm, may eventually cause treatment failure 22). In the present study, blade sliding occurred to a greater extent in the intramedullary reduction group compared with the two other groups. In addition, blade sliding was identified as a factor associated with mechanical complications. To prevent excessive blade sliding, reduction quality is essential. In elderly patients with severe osteoporosis specifically, it is exceedingly difficult to achieve appropriate reduction of comminuted fragments in the posteromedial cortex. Therefore, if the anteromedial cortex is not reduced properly, treatment failure is likely 23,24). Yoon et al. 22) emphasized the importance of achieving continuity of the medial and anterior cortical line in anteroposterior and axial images intraoperatively (Fig. 5). Although the number of conversion arthroplasties included in this study was too small to identify statistical significance, all conversions to arthroplasty occurred in the intramedullary group, in which cut-out eventually occurred due to excessive sliding. On the other hand, blade exchange alone did not cause further sliding, and bone union was achieved in the anatomical and extramedullary reduction groups; if blade sliding did occur in these groups, the blade was not able to slide excessively after cortical apposition. However, blade sliding was not blocked by cortical apposition in the intramedullary reduction group, after which cut-out occurred following varus malposition and rotation of the proximal fragment. A previous biomechanical study demonstrated results similar to those in the extramedullary reduction group, which had better resistance against axial loading compared with the intramedullary reduction group 17).
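To make the multivariable analysis above concrete, the sketch below fits a logistic regression of mechanical complications on the extent of blade sliding plus two covariates and converts the coefficients to odds ratios with 95% confidence intervals. The per-hip dataset and variable coding are entirely hypothetical and are not the authors' actual SPSS analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-hip data: sliding (mm), unstable fracture (0/1),
# intramedullary reduction (0/1), mechanical complication (0/1)
rng = np.random.default_rng(1)
n = 94
sliding = rng.gamma(shape=2.0, scale=2.0, size=n)
unstable = rng.integers(0, 2, size=n)
intramed = rng.integers(0, 2, size=n)
logit = -4.0 + 0.25 * sliding + 0.5 * unstable + 0.8 * intramed
complication = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([sliding, unstable, intramed]))
model = sm.Logit(complication, X).fit(disp=False)

# Odds ratios with 95% confidence intervals (exponentiated coefficients)
or_ci = np.exp(np.column_stack([model.params, model.conf_int()]))
print(or_ci)  # second row: per-mm odds ratio for blade sliding
```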
Implant design has also been developing to prevent excessive sliding, namely sliding that adversely affects clinical results. Although this study only examined blade-type cephalomedullary nails, early versions of the Gamma nail (Stryker Trauma, Schoenkirchen, Germany), a lag-screw-type cephalomedullary nail, were associated with higher rates of cut-out compared with the dynamic hip screw 25). The Gamma 3 nail, a third-generation version, employs a U-blade to withstand varus and rotational deforming forces. Compared with the Gamma 3 nail without a U-blade, the U-blade significantly reduced lag screw sliding 26). Ultimately, surgeons must try to limit sliding through appropriate reduction and implant selection, because the extent of blade or lag screw sliding was a risk factor for treatment failure in this study. The present study had several limitations. First, the number of cases was small because the rates of follow-up loss and mortality in elderly patients with hip fractures are relatively high. Second, there might be selection bias because only patients followed up for a minimum of 6 months were included; for this reason, the rates of treatment failure and mechanical complications may have been overestimated and higher than in previous studies. Third, the reliability of radiologic and clinical outcomes related to blade sliding (e.g., trochanteric pain, limping) was not evaluated. Nevertheless, the extent of blade sliding was accurately assessed using the adjusted Doppelt's method for intramedullary nailing. In addition, the extent of sliding was deemed an important factor impacting treatment outcomes.

CONCLUSION

The extent of sliding differed significantly depending on reduction quality and was a factor associated with mechanical complications. Preventing excessive sliding through proper intraoperative reduction is important to achieve satisfactory treatment outcomes.
Structural Alterations of the Social Brain: A Comparison between Schizophrenia and Autism

Autism spectrum disorder and schizophrenia share a substantial number of etiologic and phenotypic characteristics. Still, no direct comparison of both disorders has been performed to identify differences and commonalities in brain structure. In this voxel-based morphometry study, 34 patients with autism spectrum disorder, 21 patients with schizophrenia, and 26 typically developed control subjects were included to identify global and regional brain volume alterations. No global gray matter or white matter differences were found between groups. In regional data, patients with autism spectrum disorder compared to typically developed control subjects showed smaller gray matter volume in the amygdala, insula, and anterior medial prefrontal cortex. Compared to patients with schizophrenia, patients with autism spectrum disorder displayed smaller gray matter volume in the left insula. Disorder-specific positive correlations were found between mentalizing ability and left amygdala volume in autism spectrum disorder, and between hallucinatory behavior and insula volume in schizophrenia. Results suggest the involvement of social brain areas in both disorders. Further studies are needed to replicate these findings and to quantify the amount of distinct and overlapping neural correlates in autism spectrum disorder and schizophrenia.

Introduction

Autism spectrum disorder (ASD) and schizophrenia (SCZ) are biologically based psychiatric disorders that share a substantial number of etiologic factors and phenotypic characteristics. For instance, rare and partly overlapping copy number variants have been identified as a strong genetic risk factor for both disorders [1], and relatives of individuals with ASD are more likely to have a family history of SCZ [2]. Both disorders are influenced by deficits of the social brain [2,3], a specialized neural network dedicated to social cognition comprising in particular the medial prefrontal cortex (MPFC), the posterior temporal sulcus and the adjacent temporo-parietal junction, the anterior cingulate cortex (ACC), the insula, the amygdala, the inferior frontal gyrus, and the intraparietal sulcus [4,5]. Social cognition refers to psychological processes that benefit social exchanges; in particular, a specific cognitive ability called "Theory of Mind" (ToM), or mentalizing, allows humans to explain and predict the behavior of conspecifics by inferring their mental states [6]. fMRI studies have shown aberrant activation in SCZ and ASD using mentalizing and basic emotional tasks. In SCZ, aberrant neural activation was found in fronto-temporo-parietal regions and in the amygdala [7][8][9][10][11][12][13]. Also, in ASD, reduced activation in regions of the social brain during processing of social information has been described in the right pSTS, amygdala, and fusiform gyrus [14][15][16][17][18][19][20][21]. Moreover, in both disorders, abnormalities in global brain volume measures have been reported when compared with typically developing subjects (TD). In ASD, a greater total brain volume is present mainly in early childhood, rarely in adults [22]. In SCZ, smaller global GM and WM volumes have been reported in meta-analytic studies [23][24][25]. Meta-analyses of voxel-based morphometry (VBM) studies reported volume alterations of social brain areas in both disorders.
In ASD, smaller grey matter (GM) volumes were found in the temporal lobe, MPFC, amygdala/hippocampus, and precuneus [26,22,27], whereas larger GM volumes have been reported in the lateral prefrontal cortex and temporo-occipital regions [26,22,27]. Structural alterations in ASD seem to be age-related, as in adults with ASD structural alterations in the fusiform gyrus, cingulum, amygdala, and insula were less often reported compared with children/adolescents [22,27,1]. Similar to ASD, in SCZ structural alterations have been found in the social brain, and meta-analyses have described smaller GM volumes in frontotemporal regions, the ACC, hippocampus/amygdala, and the insula [28][29][30][31]23]. GM alterations were more extensive in patients with long illness duration, possibly indicating a neurodegenerative process [29,30,32,28]. A recent meta-analysis implemented anatomical likelihood estimation (ALE) to compare VBM studies on both SCZ and ASD [33]. Lower GM volumes in the limbic-striato-thalamic circuitry compared to controls were found as a structural overlap between SCZ and ASD. Distinct volumetric alterations were observed in the amygdala, caudate, frontal and medial gyrus (SCZ) and putamen (ASD). In summary, studies on structural alterations in ASD and SCZ compared to TD have shown GM volume alterations in social brain areas, with a high diversity of brain volume changes within the social brain network that are in part contradictory. The contradictions may be explained by phenotypic differences within disorders, age and IQ effects, or by the different methods used for data acquisition and analysis [32,31,27]. Therefore, a direct comparison of both disorders within a unitary methodological framework is necessary to clearly describe overlapping and disorder-specific alterations of brain volume [34]. In the present VBM study, ASD, SCZ, and TD were compared directly with respect to global and regional brain volume. Moreover, structural alterations were correlated with differences in mentalizing abilities and disorder-specific symptom severity. This study aims to address three hypotheses. First, in global brain measures, a slightly lower total GM and total white matter volume is expected in SCZ compared to TD. No differences in global brain measures are expected between ASD and TD, as greater total GM volume was found mainly in ASD children but not in adults. Second, we hypothesize that in both disorders, compared to TD, lower GM volumes are present in social brain areas, reflecting impairments in social cognition. As ASD and SCZ show different and in part contrary deficits in social cognition (i.e., in ASD the ability to attribute mental states to others is deficient, while in paranoid SCZ these attributions are intensified), we expect distinct volume alterations comparing SCZ with ASD. Third, associations between the extent of volume alterations in these areas and behavioral data (symptom severity and mentalizing abilities) are expected.

Participants

Thirty-four patients with ASD (3 females, age range 14 to 33 years, mean age 19.06, SD 5.12; mean IQ 105.73, SD 12.92), 21 patients with SCZ (5 females, aged 14 to 33 years, mean age 24.67, SD 5.20; mean IQ 103.33, SD 11.21), and 26 TD (4 females, aged 14 to 27 years, mean age 19.54, SD 3.46; mean IQ 107.75, SD 11.97) were investigated. The ASD sample consisted of 16 patients with Asperger syndrome, 11 patients with childhood autism, and 7 with atypical autism.
The study was approved by the local ethics committee of the medical faculty, JW Goethe-University, Frankfurt am Main, and was carried out according to the Declaration of Helsinki. All subjects (and their parents in the case of underage subjects) gave their written informed consent for participation in the study. To exclude additional psychiatric disorders, all subjects were examined by experienced psychiatrists. In addition, participants filled in the youth or young adult self-report (YASR/YSR; [40,41]), a screening instrument to assess self-rated psychopathology. Also, the child behavior checklist/young adult behavior checklist (CBCL/YABCL; [42,41]) was completed by a parent, where possible. All TD subjects showed YASR/YSR and CBCL/YABCL subscale T-scores <67 (below the borderline clinical cut-off). SCZ were older than ASD and TD; therefore, age was controlled as a covariate in all further analyses.

Psychological assessment

IQ was measured by the Raven Standard Progressive Matrices (SPM, [43]). Handedness was determined by the Edinburgh handedness inventory [44]. To characterize deficits in mentalizing abilities, all subjects were investigated using the 'reading the mind in the eyes' test (RME) [45]. In this test, participants were asked to identify complex emotional and non-affective states in images presenting human eyes.

Data acquisition and voxel-based morphometry analysis

Structural magnetic resonance images were acquired using a 3 Tesla Siemens Allegra scanner (Erlangen, Germany) with a 1-channel head coil. Data were recorded using a T1-weighted MDEFT sequence [47] with the following parameters: TR: 10.55 ms; TE: 3.06 ms; TI: 680 ms; flip angle: 22°. One dataset consisted of 176 axial images with an in-plane resolution of 1 mm; field of view: 256 mm; slice thickness: 1 mm.

Imaging data pre-processing

All datasets were manually reviewed for head motion and image quality. Datasets with low image quality or motion artefacts were excluded from analysis. Structural images underwent pre-processing using optimized VBM as implemented in SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8) according to the standardized procedure [48]. Images were segmented into GM and white matter (WM). Global GM, global WM, and total brain volume (TBV) were calculated. Subsequently, images were normalized by creating a DARTEL template to provide increased accuracy of inter-subject alignment. The images were resampled to 1.5×1.5×1.5 mm³ voxel size. Images were smoothed (using an 8 mm FWHM isotropic Gaussian kernel) and normalized to MNI space. Data were corrected for differences in global GM/WM volume. As treatment with neuroleptic medication differed between groups, daily chlorpromazine equivalents were tested for association with local volume alterations in SCZ.

Statistical analysis

Total tissue volumes were calculated by summing the partial volume estimates multiplied by the voxel volume across the entire brain. Between-group differences in global brain measures were examined using an ANOVA with the factor group, including age as a covariate. Subsequent comparisons between means were performed using Bonferroni's post hoc test. Individual GM and WM segments were subjected to a voxel-wise multiple regression analysis. The differential effects of the three groups (ASD, SCZ, and TD) were assessed within an ANCOVA model including age and global GM or global WM volume as covariates. The ANCOVA was followed by between-group comparisons. The significance threshold was set at P<0.001, uncorrected (K>20).
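For illustration, the sketch below reproduces the logic of the voxel-wise ANCOVA (group dummies plus age and global GM as covariates, tested with an F-test) and of the peak-voxel Pearson correlations on a single hypothetical voxel. The actual analyses were run in SPM8 and SPSS, and all values here are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_asd, n_scz, n_td = 34, 21, 26
n = n_asd + n_scz + n_td

# Simulated per-subject values for one voxel
gm_voxel = rng.normal(0.55, 0.05, n)    # GM density at the voxel
age = rng.uniform(14, 33, n)            # covariate: age (years)
global_gm = rng.normal(700, 60, n)      # covariate: global GM (mL)
group = np.repeat([0, 1, 2], [n_asd, n_scz, n_td])  # 0=ASD, 1=SCZ, 2=TD

# Full model: intercept + two group dummies + covariates
dummies = np.column_stack([(group == 1).astype(float),
                           (group == 2).astype(float)])
X_full = np.column_stack([np.ones(n), dummies, age, global_gm])
# Reduced model: covariates only (no group effect)
X_red = np.column_stack([np.ones(n), age, global_gm])

def rss(X, y):
    # Residual sum of squares of an ordinary-least-squares fit
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

df1 = 2                          # group dummies dropped in the reduced model
df2 = n - X_full.shape[1]
F = ((rss(X_red, gm_voxel) - rss(X_full, gm_voxel)) / df1) / \
    (rss(X_full, gm_voxel) / df2)
print(f"main effect of group: F({df1},{df2}) = {F:.2f}, "
      f"p = {stats.f.sf(F, df1, df2):.4f}")

# Peak-voxel Pearson correlation with a behavioral score (e.g., RME)
rme = rng.integers(15, 33, n).astype(float)
r, p = stats.pearsonr(gm_voxel, rme)
print(f"r = {r:.2f}, p = {p:.4f}")
```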
Anatomical regions and denominations are reported according to the atlas of Talairach [49,50]. Coordinates are provided as maxima in given clusters according to the standard MNI template. To identify brain abnormalities in ASD and SCZ associated with illness severity and social cognition, individual peak voxel data were extracted from the regions resulting from the between-group comparisons and associated with RME, ADOS, and PANSS scores using Pearson's correlation. Results reported as significant were defined as P<0.05 and r>0.4. Data were analysed using SPSS Statistics version 17.0.

Voxel based morphometry

As results did not survive FWE/FDR correction, they are given at a P<0.001 level, uncorrected. A main effect of group was observed for GM volume in the left anterior insula (AI), left amygdala, and a medial occipital area (Table 2). No main effect of group was found for WM volume. Thus, between-group analyses were performed for GM only. In the contrast SCZ vs. ASD, GM volume in the left AI was smaller in ASD (Figure 2). In the contrast ASD vs. TD, smaller GM volumes were found in ASD in the left amygdala, left AI, and additionally in the anterior MPFC and right amygdala (Figure 2). In the medial occipital area, GM volume was larger in ASD. The contrast SCZ vs. TD did not reveal any significant GM alterations. Antipsychotic treatment must be considered as a potential bias on regional brain volume. No association was observed between amygdala, insula, or MPFC volume and chlorpromazine equivalents.

Correlations between volumetric and behavioral data

In SCZ, a positive correlation was found between left AI GM volume and the PANSS hallucinatory behavior score (r = 0.56; P = 0.008). No correlation was found between GM alterations and illness duration. In ASD, a positive correlation was found between left amygdala volume and the RME score (r = 0.41; P = 0.015) (Figure 3). As one peripheral data point (Figure 3, data point (0.37/7)) was suspected of being an outlier, a second analysis was performed with the remaining data. The correlation persisted at a trend level.

Discussion

To our knowledge, this is the first structural MRI study to compare ASD and SCZ directly. The following main findings were revealed: 1) No global GM or WM alterations were present between groups. 2) In the VBM analysis, GM alterations were present mainly in regions associated with social cognition (amygdala, MPFC, and insula). Direct comparison of both disorders demonstrated a smaller GM volume in the left anterior insula (AI) in ASD compared to SCZ. ASD showed smaller local GM volumes compared to TD in the bilateral amygdala, left AI, and anterior MPFC. No differences were found between SCZ and TD. 3) Alterations in GM volume correlated significantly with specific parameters of psychopathology and social abilities: hallucinatory behavior with AI volume in SCZ, and mentalizing abilities with amygdala volume in ASD.

Global brain measures

No alterations in global GM/WM were found between groups. Although meta-analytic studies in SCZ reported alterations in global brain parameters such as smaller global GM, global WM, and TBV [23,51,52], original data reported contradicting results. Out of 32 cross-sectional studies that were included in meta-analytic studies on first-episode SCZ [24,53], only 7 studies reported significant results. As alterations are subtle [54], large samples are needed to detect these differences.
Thus, it is not surprising that, in our study, no differences between groups regarding global brain measures were detected. In ASD compared to TD, previous studies reported larger GM, WM, and TBV predominantly in children, but not in the adult population [22]. This is in accordance with the results of this study.

Voxel based morphometry

1. Direct comparison of ASD and SCZ. This study suggests a dysfunctional involvement of the AI in both disorders. ASD patients showed a smaller GM volume in the left AI compared to SCZ, whereas a positive correlation was found between insular GM volume in SCZ and the PANSS hallucinatory behavior score. The AI is an extensively connected, multifaceted brain region that is involved in numerous brain functions [59]. This brain region is highly involved in processing sensory stimuli, with a unique role in interoception, monitoring physiological reactions such as heartbeat frequency, skin conductance, pain, and touch [60][61][62][63], and also the emotional component of interoceptive awareness [64,65]. The interoceptive awareness of the body as an independent entity distinct from the external environment is a precondition for self and non-self discrimination [66,59]. Furthermore, the AI has been implicated in empathic abilities [67,68,62] and is involved in auditory and facial affect processing [69][70][71]. One theory, the salience network hypothesis [72][73][74], conceptualizes the disparate AI functions using a comprehensive "network perspective". According to this model, the AI is the integral hub of the salience network and represents a multimodal salience detector that identifies the most relevant among several internal and external stimuli. It is associated with segregated functions (i.e., processing of sensory information like pain, social cues like facial expressions, and the analysis of one's own emotional state) which all concern subjectively relevant information. A salience network dysfunction has been proposed to play a major role in ASD and SCZ [72,[75][76][77][78]. In ASD, the smaller insular GM volume found in this study fits well with the AI hypoactivity reported by fMRI studies on tasks of social cognition [58]. A structural insular abnormality might lead to a disconnection between the AI and sensory and limbic structures, resulting in a limited ability to identify the salient stimuli necessary for adapting adequately to the social environment [72]. Due to this dysfunction, social cues might not be identified as salient and thus not labelled as emotionally rewarding in the insula. In contrast, sensory stimuli might be considered salient, which may underlie sensory interests, repetitive behavior, or even anxiety and avoidance in ASD. Studies on high-risk individuals show that smaller GM volumes of the ACC and insula, both major regions of the salience network, are associated with transition to psychosis [81,82]. Salience network dysfunction has been proposed to play a key role in positive and negative symptoms in SCZ [75,83,77]. In this study, we found a significant relation between insula volume and the PANSS hallucinatory behavior score. Hallucinations can be conceptualized as a failure to differentiate an internally generated from an external sensory experience [84,85].
Functional and structural studies revealed that the insula, along with regions traditionally known as language areas, seems to play a key role in auditory hallucinations in SCZ [86][87][88][89][90][91][92]: an insula dysfunction might cause confusion of the two sources, resulting in internal sensory information being attributed to external sources [93]. Audible thoughts, thought insertion, and hallucinations might be the consequence. Still, a positive correlation between insula volume and the PANSS hallucinatory behaviour score was surprising, as it stands in contrast to reports of insular GM volume loss in SCZ [29]. Further studies are needed to investigate the significance of these alterations in more detail and to resolve the mentioned inconsistencies.

2. Direct comparison of ASD and TD. The ASD group was characterized by significantly smaller GM volumes in the bilateral amygdala, left AI, and anterior MPFC compared to TD. Hence, only areas involved in social cognition were significantly smaller. Smaller GM volume in the amygdala-hippocampal complex represents a common finding in VBM studies in ASD [53,[94][95][96][97]. The amygdala is involved in emotion recognition and affective Theory of Mind and is thus a key player in social cognition [98]. The association of social impairment with abnormalities of the amygdala has been reported across many different scientific fields [99][100][101]. In a meta-analysis of functional neuroimaging studies in ASD, hypoactivation of the amygdala has been found in different social cognitive tasks [102,103]. Because a smaller cortical GM volume has frequently been associated with reduced function in the affected structure [104], the smaller GM volume in the amygdala in ASD fits well with the lower amygdala activation in functional neuroimaging studies. This is supported by the positive correlation we found between mentalizing abilities and amygdala volume in ASD. In agreement with the results of this study, McAlonan found a significantly smaller insular GM volume in ASD [105]. Greater insular surface area is associated with poorer social behaviour in ASD [106], and a meta-analysis of functional MRI studies examining social processing identified the AI as a consistent locus of hypo-activity in autism [58]. Altered functional connectivity of the AI with the amygdala and somatosensory regions was also present in ASD [107]. These structural and functional findings could underlie typical ASD symptoms like altered emotional experiences and impaired social abilities. In the MPFC, we found a smaller GM volume in ASD. According to a functional division of the MPFC, this area is located in the anterior MPFC [108]. The MPFC, in particular its anterior part, is a key region for mentalizing abilities. It is involved during communicative intention [109][110][111][112] and triadic interactions [113][114][115]. Different lines of evidence imply alterations of the MPFC in ASD. For example, injuries of the MPFC have led to deficient mentalizing abilities and autism-like behavior [116]; histologic abnormalities in this region have been found in animal models of autism [117,118] and in postmortem studies of ASD patients [119]. Different neuroimaging techniques have revealed reduced activity in the MPFC in ASD patients using MEG [120], PET [121], and functional MRI during Theory of Mind tasks [122,14]. The reduced activity in the MPFC corresponds well with the smaller GM volume across ages described in previous meta-analyses [27].
This study provides an additional confirmation of this finding.

3. Direct comparison of SCZ and TD. No structural alterations were found in SCZ compared to TD. This is in disagreement with our hypothesis, as brain volume alterations were expected in areas encoding social cognition in SCZ. Smaller regional brain volumes have consistently been reported in meta-analyses [55,31,29]. However, these results are highly heterogeneous, as age of onset and illness duration influence the results of VBM analyses, causing different alterations in regional GM volume [28,31]. In this study, three aspects might explain the lack of observed differences between SCZ and TD. First, SCZ comprised the smallest group of participants; thus, the study may have lacked the power to find smaller volumetric changes [123], which must be considered a limitation. Second, with a mean age of 24.7 years (range 14 to 33 years), the SCZ patients in our study were relatively young. In SCZ, GM alterations are more pronounced in samples with rising illness duration and age [51]. Third, the examined population was a mixed sample of adolescent-onset SCZ and adult-onset SCZ. A heterogeneous sample in terms of age of onset might have contributed to a lack of alterations in GM volume in this contrast.

Study limitations

Some aspects must be considered as limitations of this study. (1) The study was designed to detect only large volume differences; due to the limited sample size and a lack of power, possibly existing smaller volume alterations were not detected. We can therefore only interpret the differences detected in our study but not the lack of differences. Results need to be considered preliminary, as they were reported at a P<0.001 level, uncorrected, and did not withstand a more stringent and conservative correction for multiple comparisons across the brain. (2) The SCZ sample was heterogeneous with regard to disease onset, comprising both adolescent- and adult-onset schizophrenia.

Conclusion

In summary, this study compared ASD and SCZ patients directly for global and regional alterations in brain structure. Results emphasize distinct brain substrates in both disorders, as GM alterations of the social brain were prominent in ASD only. Still, we found that disturbances in the insula may play an important role in both conditions. As a morphologic correlate, this study revealed a lower insular GM volume in ASD; in SCZ individuals, a positive correlation of insular GM volume with the PANSS hallucinatory behavior score was found, which contrasts with the smaller insular volume frequently reported in SCZ. Further studies are needed to replicate the described findings and to investigate the neurophysiological significance of these alterations for ASD and SCZ in more detail.
Premenarchal Adolescent Female Ovarian Torsion: A Case of Delayed Diagnosis

Patient: Female, 11-year-old
Final Diagnosis: Ovarian torsion
Symptoms: Abdominal pain
Medication: —
Clinical Procedure: —
Specialty: Surgery
Objective: Mistake in diagnosis

Background: Ovarian torsion is a rare surgical emergency in premenarchal girls. Early diagnosis and surgical detorsion are required to restore blood flow and limit tissue damage.

Case Report: Here, we present a case of ovarian torsion and appendicitis in an 11-year-old premenarchal girl who presented to our emergency room with a 4-day history of right iliac fossa pain, limping, and fever. Upon initial evaluation in our hospital, her vital signs were stable and clinical examination revealed abdominal guarding and right lower quadrant rebound tenderness with a positive Rovsing's sign. Abdominal ultrasound and computed tomography scans showed adnexal cysts and torsion, an inflamed appendix, and free fluid in the abdomen. Intraoperative findings included a twisted gangrenous ovary and an edematous appendix. The patient underwent emergency laparoscopic oophorectomy and appendectomy.

Conclusions: This case demonstrates that reactive appendicitis can occur secondary to inflammation of adjacent structures such as the ovary.

Background

Ovarian torsion is a rare gynecologic emergency that occurs when the ovary twists on its ligamentous support, obstructing the blood flow to the ovary. There is no specific clinical presentation of this condition in pediatric cases. Moreover, diagnostic imaging by computed tomography, magnetic resonance imaging, and Doppler ultrasound may be inconsistent or equivocal [1]. The occurrence of ovarian torsion is rare among children and adolescents; however, it is important to consider it in pediatric females with abdominal pain. Approximately 2 out of 10 000 children and adolescents are affected by this disease, which constitutes 15% of all torsion cases [2]. Torsion is observed among both pre- and post-menarchal girls. Adnexal torsion can affect otherwise healthy ovaries, known as torsion of the normal adnexa, or pathologic adnexa, which typically contain ovarian or para-ovarian cysts. Among the pediatric and adolescent populations, most torsion cases (40-84%) develop because of some ovarian pathology, whereas the other cases demonstrate torsion of normal adnexa [3,4]. Torsion in post-menarchal teens is expected to be linked with functional ovarian cysts such as corpus luteum cysts [5]. The risk of recurrent torsion is associated with the underlying pathology that causes torsion. Previous studies have shown that 60% of recurrences in pre- and post-menarchal females are associated with normal adnexa [6,7]. Moreover, these cases present a chance of preventing recurrent torsion by ovarian fixation of the ipsilateral or bilateral adnexa [8]. Here, we describe an uncommon case of a delayed diagnosis of ovarian torsion complicated by reactive appendicitis.

Case Report

An 11-year-old pre-pubertal girl with no past medical or surgical history presented to our ED with a 4-day history of right iliac fossa pain and limping associated with fever. Before this, she had presented to the ED of another hospital on 2 occasions and was discharged after being diagnosed with suspected gastroenteritis or early acute appendicitis.
Physical examination of the abdomen revealed guarding and right lower quadrant rebound tenderness, with positive Rovsing's and Dunphy's signs. Laboratory tests showed an elevated white blood cell count of 12, and abdominal ultrasound demonstrated free fluid in the pouch of Douglas. As the appendix was not visualized, she underwent a CT scan with intravenous contrast. This revealed an appendix with a thickened, enhancing, edematous wall in the right iliac fossa, extending medially with its tip lying adjacent to the anterolateral aspect of the right ovarian complex cyst. The inflamed appendix measured about 10 mm in diameter and was associated with stranding and haziness of the mesenteric fat planes and moderate free fluid in the right iliac fossa, hepatorenal pouch, and pelvic region, suggestive of acute appendicitis with possible rupture. The right adnexa showed a cystic structure with septation, a hyperdense component, and a thickened wall, measuring approximately 6.5×4.8 cm, with endometrial fluid (Figures 1, 2).

The patient underwent laparoscopic oophorectomy of the right ovary and appendectomy. Intraoperatively, the right ovary was found to be twisted and hemorrhagic. The wound was closed in layers using Vicryl 3/0 for the sheath and Vicryl Rapid 4/0 for the skin. The wound was covered and the Foley catheter was removed. The total blood loss was 10 mL and the patient did not require a blood transfusion. The appendix and the hemorrhagic, torsed ovary were sent for pathological analysis. Histopathological examination revealed markedly congested, non-viable tissue with hemorrhagic infarction and no residual ovarian tissue, cyst, or malignancy. The appendix was inflamed distally, with full-thickness necrosis at the tip. The patient recovered without complications.

Ethical consideration

Consent was obtained from the parents of the patient before continuing with this case study.

Discussion

Ovarian torsion is an infrequent cause of abdominal pain in the pediatric female population, and its co-occurrence with appendicitis is rare. Physical examination, ultrasonography, radiographs, and computed tomography are used as diagnostic modalities in these patients [9]. Outcomes appear to be good in patients who undergo appendectomy and oophorectomy with or without salpingectomy. In previously reported cases, a cystic ovarian lesion was present in 5 of 7 cases, while 1 of 7 showed an ovarian neoplasm on pathologic analysis. The present case is unique in that no coexisting ovarian pathology was identified. Previous studies have shown that sonography and computed tomography are widely used for the diagnosis of ovarian torsion [10,11]. The major ultrasonographic signs of ovarian torsion include multiple follicles in the cortical portion of the ovary and a pelvic mass, with or without fluid in the pouch of Douglas [12]. In the present case, no flow to the torsed ovary was observed on duplex imaging. Ultrasonography with color Doppler can help differentiate between appendicitis and ovarian torsion. The 2 main issues highlighted by the present case are that reactive appendicitis can occur secondary to inflammation of adjacent structures such as the ovary, and that adnexal torsion should be considered in the differential diagnosis of reactive appendicitis, with consultation of a surgeon skilled in managing both conditions.
A similar case study by Al-Turki [13] described a delayed diagnosis of ovarian torsion in a patient presenting with vomiting and sudden onset of right iliac fossa pain. On initial examination, the patient was tender in the right iliac fossa but without rebound tenderness; therefore, an initial diagnosis of appendicitis was made. However, after a 17-hour delay, the case was successfully managed with detorsion of the ovary after clinical imaging revealed a normal appendix. That case demonstrated the overlapping clinical features of appendicitis and ovarian torsion. Another case, in a woman of reproductive age, was reported by Callen et al [14]. Their patient presented with a history of subacute lower abdominal pain that worsened with physical activity, and a low-grade fever. Abdominal examination revealed minimal periumbilical tenderness without any guarding or rebound tenderness. Ultrasound revealed an edematous ovary, but ovarian torsion was considered unlikely. However, magnetic resonance imaging (MRI) confirmed a twisted vascular pedicle and demonstrated a markedly dilated, hyper-enhancing appendix with extensive inflammatory changes involving the surrounding peritoneal fat as well as the small and large bowel. She consented to diagnostic laparoscopy for possible appendectomy; however, the intraoperative assessment revealed features of chronic appendicitis with dense adhesions precluding safe appendectomy. The ovary was edematous due to inflammation from chronic appendicitis, without any torsion or ovarian mass; in other words, this was ovarian edema due to adjacent appendicitis [14]. A rare case of pediatric ovarian torsion in a premenarchal 13-year-old girl was reported by Rajwani and Mahomed [15]. That case presented with a massively engorged, edematous, hemorrhagic torsed ovary treated with salpingo-oophorectomy, and was similar to our own in having a clinical presentation with features of acute appendicitis and similar histopathology results.

Adnexal torsion is most common among women aged 20-30 years [16]. Even though adnexal torsion is rare in premenarchal girls [15], it should be ruled out in patients presenting with signs and symptoms of an acute abdomen, such as those with suspected renal colic or acute appendicitis. Distinguishing between these 2 clinical entities can be challenging because of their overlapping signs and symptoms; therefore, ultrasound should be obtained to confirm the clinical diagnosis and prevent misdiagnosis. According to a previous study [13], young girls presenting with lower abdominal pain should undergo an ultrasound to rule out adnexal torsion and prevent potentially irreversible damage to the ovaries. In our case, ovarian torsion was evident on ultrasound; however, a previous study reported that ultrasound was insufficient to detect ovarian torsion in a woman of reproductive age [14]. Our patient also underwent a CT scan to confirm the diagnosis, which highlights the need for further research to evaluate whether ultrasound should always be complemented by more detailed imaging modalities such as CT or MRI. The coincidence of ovarian pathology with appendicitis is rare; however, it can require surgery in some pediatric female patients. It is often possible to salvage the torsed ovary, and ovary-sparing surgery should be performed whenever possible. The present case shows, however, that ovarian torsion accompanied by appendicitis may require salpingo-oophorectomy along with appendectomy.
Conclusions

Ovarian torsion in pediatric patients has a clinical presentation similar to that of acute appendicitis and may be treated by laparoscopy in most cases. The present case shows that reactive appendicitis can occur secondary to inflammation of adjacent structures such as the ovary. There is an increased risk of torsion recurrence among premenarchal girls with torsion involving normal adnexa, who should be followed up accordingly. This case also highlights the need for further studies to evaluate the sensitivity and specificity of ultrasound alone in detecting complicated adnexal torsion in adolescent girls.

Name of Department and Institution Where Work Was Done

Emergency Department, King Abdullah bin Abdulaziz University Hospital, Riyadh, Saudi Arabia.
Science with the Virtual Observatory: the AstroGrid VO Desktop

We introduce a general range of science drivers for using the Virtual Observatory (VO) and identify some common aspects to these as well as the advantages of VO data access. We then illustrate the use of existing VO tools to tackle multi wavelength science problems. We demonstrate the ease of multi mission data access using the VOExplorer resource browser, as provided by AstroGrid (http://www.astrogrid.org), and show how to pass the various results into any VO enabled tool such as TopCat for catalogue correlation. VOExplorer offers a powerful data-centric visualisation for browsing and filtering the entire VO registry using an iTunes type interface. This allows the user to bookmark their own personalised lists of resources and to run tasks on the selected resources as desired. We introduce an example of how more advanced querying can be performed to access existing X-ray cluster of galaxies catalogues and then select extended-only X-ray sources as candidate clusters of galaxies in the 2XMMi catalogue. Finally we introduce scripted access to VO resources using python with AstroGrid and demonstrate how the user can pass on the results of such a search and correlate with e.g. optical datasets such as Sloan. Hence we illustrate the power of enabling large scale data mining of multi wavelength resources in an easily reproducible way using the VO.

SCIENCE DRIVERS AND THE ADVANTAGES OF VO ACCESS

For many years one of the primary aims of the Virtual Observatory movement has been to better enable multi wavelength astronomy. Example science drivers include finding all the information on a given set of objects, which may be defined by positions, colours or morphology. It is common to study not just the resultant correlations but to identify the so-called "outlier" objects, which have unusual properties and which may offer new insight into the astrophysical processes driving the observed emission. Another requirement may be to build a spectral energy distribution from multi archive data while accounting for instrumental, sensitivity and aperture effects. A longer term goal is to compare such correlations on-the-fly with numerical simulations as the latter become available in a more standardised way. Common aspects to the above use cases include the requirement to browse, search and manage large amounts of distributed heterogeneous data. The astronomer then wants to be able to combine multi wavelength data taking into account differences in units and photometric systems; spatial, wavelength and time coverage; resolution, point spread functions and observing techniques. We set ourselves a challenging target! The advantage of VO data access in addressing these challenges is that accessing multi wavelength data becomes easier: you do not need to access multiple interfaces but can access data from one single entry point. Furthermore the user can then build workflows, i.e. pieces of code which run on the server and are reusable. The range and complexity of datasets and resources published to the VO is rapidly increasing. These resources are heterogeneous, and are published through various standard interfaces allowing access to images, catalogues, spectra, transient event data, tool interfaces and so on.
Providing access to these resources is a success of the VO movement, where effective use of newly emerging publishing standards as provided by the International Virtual Observatory Alliance (IVOA, http://www.ivoa.net) has been made by the astronomy community. Information about each resource published to the VO is entered into a top level, continually updating "registry", providing in effect a record of where and what the resource is. With the advent of these many resources available through the VO, an emerging challenge is how to offer the astronomer a reliable and usable means to search, retrieve and visualise the relevant data and resources to meet the needs of their particular science problem.

INTRODUCING THE ASTROGRID VO DESKTOP

In April 2008, the UK AstroGrid VO project made the first public release of the VO Desktop suite of applications, which consists of several interlinked tools including the VOExplorer resource browser, File Explorer, Task Runner and Query Builder. You can search for resources and data in the VO; bookmark your favourites; fetch images, spectra, light curves and catalogues; run queries on databases; save and share files in VOSpace; and invoke remote applications. VO Desktop also runs background software called "Astro Runtime" which handles all interactions with the remote services. VOExplorer offers a powerful data-centric visualisation for browsing and filtering the entire VO registry using a familiar "iTunes" type interface. The Registry stores information about Resources all round the world. A "Resource" could mean a data collection like UKIDSS, or an application like Sextractor that you can invoke as a remote service, or just information about an organisation. Data collection resources will usually have one or more "capabilities", i.e. ways of accessing the data, like an image cut out service, or a catalogue conesearch, or a full query-language (ADQL) interface. Each registry entry contains metadata describing the resource, basically a set of standard attribute=value pairs. This information will tell you whether the resource is a catalogue or an image atlas, whether it has infrared or X-ray data, as well as who curates the data. To locate particular object(s) of interest, say, you need to send a query to the resource itself. Note that the Registry is maintained by AstroGrid, but the information in it is provided by the resource owners, not by the AstroGrid project. VOExplorer allows you to bookmark your own personalised lists of resources for repeat use, filter using any available metadata and then run tasks on the selected resources as desired. We now illustrate the use of the AstroGrid software to tackle a typical multi wavelength science case involving X-ray selected clusters of galaxies at different levels of complexity, and show how data or search and correlation results may be saved to remote or local storage as well as passed to any suitably enabled VO tool for further visualisation and analysis.

VO DESKTOP IN ACTION: X-RAY CLUSTERS OF GALAXIES

In Fig 1 we show a screenshot of the VOExplorer resource search interface, which allows a user to build a set of AND, OR conditions. This example is a simple search for resources containing both "X-ray" in the waveband AND requiring that the subject contains the word "cluster". A list of resources is then returned and the search may now be refined further by means of metadata filter wheels acting on content, coverage or other resource type.
In Fig 2 the resultant resources that satisfy this condition are displayed, following a further filtering to any resource including the keyword "XMM", denoting the XMM-Newton satellite. On selecting the XMM-LSS catalogue of X-ray selected clusters of galaxies (Pierre et al 2007), relevant "Actions" available for the selected resource are listed, including a positional search facility "Query" which launches the AstroScope search tool; a "Multi Query" may also be launched for a list of positions to be supplied by the user (Figure 3). The output can then be passed directly into any VO tool also running, such as TopCat, or saved to disk. Similarly, for solar datasets or transient event data one can launch the HelioScope or VOScope tool on selected datasets to search by time interval. As an example of a more advanced task, Figure 4 illustrates pre-filtering on catalogue parameters with an ADQL query.

Figure 4. Advanced querying including pre-filtering on catalogue parameters: select an ADQL query interface to a 2XMMi X-ray catalogue resource using VOExplorer, then identify columns in the catalogue corresponding to "extent". The user may then construct a simple but powerful query against these parameters in the "Edit" pane using the standard "ADQL" query language, and here extract all extended sources (which the user has defined as having extent radius greater than 6.0 arcseconds) to create a new science sample in VOTable format for further visualisation and analysis.

Figure 5. Multi mission science archive interfaces: modern astronomers are currently faced with multiple search interfaces for each different mission they wish to search via project webpages. While the expert information provided in each is often essential, each different interface is likely to have its own terminology and a different flavour of SQL querying. Hence the user ends up searching each one separately and stitching the results back together later. But using the VO to perform standardised ADQL (Figure 4) or more advanced python scripting queries (Figure 6) offers a more efficient solution.

Figure 6. VO scripting using python: with the AstroGrid VODesktop running, the user may build advanced scripts in python in order to perform more complex multi mission, multi parameter searches including cross correlation and advanced data mining techniques. In this example the user executes cone searches to NED and 2MASS around a particular object position and then cross matches the resultant outputs before saving to local disk.

MORE ADVANCED QUERYING AND DATA MINING: CROSS MATCHING 2XMM AND SDSS

Typically, the modern multi wavelength astronomer has to access many different mission archives in order to select samples from each, using a number of individual project webpages (Figure 5). Of course the expert information provided for each catalogue or data product for each different mission under study is usually essential in order to fully understand the selection criteria before a full scientific analysis. But each different interface is also likely to have its own terminology and a particular flavour of SQL querying. Hence even performing a simple cross match involves searching each one separately and then stitching the outputs back together afterwards. This can be a non-trivial task. However, ADQL searches, and in particular more advanced cross matching to other archives as well as other complex tasks, may now be performed using a common schema in python with AstroGrid. The user may run commands directly from the command line as long as the AstroGrid Runtime capability is running in the background.
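The flavour of query being described can be made concrete with a short sketch. The snippet below is illustrative only: the table name (`twoxmmi`), the extent column (`ep_extent`), and the TAP endpoint URL are placeholder assumptions rather than the actual AstroGrid resource identifiers, and pyvo is used here as a modern stand-in for the Query Builder / Astro Runtime route described in the text.

```python
# Illustrative sketch of the extended-source selection described above.
# The table name, column name, and TAP endpoint are placeholders, and pyvo
# stands in for the AstroGrid Query Builder / Astro Runtime of the paper.
import pyvo

# Keep only sources whose extent radius exceeds 6.0 arcsec, i.e. the
# candidate clusters of galaxies (per the selection quoted in Figure 4).
ADQL = """
    SELECT srcid, ra, dec, ep_extent
    FROM twoxmmi
    WHERE ep_extent > 6.0
"""

tap = pyvo.dal.TAPService("http://example.org/tap")  # placeholder endpoint
sample = tap.search(ADQL).to_table()                 # astropy Table
sample.write("extended_2xmmi_sample.xml", format="votable", overwrite=True)
```

The result is a VOTable science sample that can then be passed to TopCat or any other VO-aware tool, as described for the VODesktop workflow.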
Figure 6 shows a reasonably simple example chosen from a number of template python scripts for the VO, as provided by AstroGrid. In it, the user executes cone searches to NED and 2MASS around a particular individual object position and then cross matches the resultant outputs before saving to local disk. Now, by selecting an X-ray extended source sample from the 2XMM serendipitous source catalogue, as described previously using ADQL or python scripting, and then correlating the results with the Sloan Digital Sky Survey (SDSS) optical catalogues, e.g. Adelman-McCarthy et al (2008), one could derive an averaged photometric redshift for each X-ray selected galaxy cluster candidate, then repeat for the entire sample and compare to any catalogued spectroscopic redshifts. Figure 7 shows an example of the fruits of this kind of work for one such XMM cluster candidate by Lamer et al (in prep), now made possible for large samples using the VO in a fraction of the time. Finally, once a science sample is made the user is offered the opportunity to process and act on those data sets. For this purpose a range of data visualisation and analytical tools, often building on previously existing tools developed over many years, are available via the PLASTIC/SAMP messaging protocol. Popular tools include TopCat for catalogues, Aladin and Gaia for images, and SPLAT and VOSpec for spectra. These are described in more detail elsewhere in these proceedings.
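For readers without the original template scripts to hand, the Figure 6 workflow (cone searches to NED and 2MASS around a position, followed by a cross-match) can be sketched with today's astroquery and astropy packages as stand-ins for the AstroGrid python interface. The position, search radius, match tolerance, and output file name are arbitrary illustrative values, and the column names follow current astroquery output and may differ between versions.

```python
# Sketch of the Figure 6 workflow using modern astroquery/astropy as
# stand-ins for the AstroGrid python templates. Position, radius, and the
# 1-arcsec match tolerance are illustrative values only.
from astropy import units as u
from astropy.coordinates import SkyCoord
from astroquery.ipac.ned import Ned
from astroquery.vizier import Vizier

pos = SkyCoord(ra=34.5, dec=-4.5, unit="deg")       # placeholder position

ned = Ned.query_region(pos, radius=2 * u.arcmin)    # NED cone search
tmass = Vizier(row_limit=-1).query_region(          # 2MASS point sources
    pos, radius=2 * u.arcmin, catalog="II/246")[0]

# Cross-match the two result tables on sky position (column names follow
# current astroquery conventions and may vary).
ned_coords = SkyCoord(ned["RA"], ned["DEC"], unit="deg")
tm_coords = SkyCoord(tmass["RAJ2000"], tmass["DEJ2000"])
idx, sep2d, _ = ned_coords.match_to_catalog_sky(tm_coords)
matched = sep2d < 1.0 * u.arcsec                    # keep close pairs only

ned[matched].write("ned_2mass_matches.xml", format="votable", overwrite=True)
```

Scripted in this way, the same search can be re-run over an arbitrary list of positions, which is the reproducible, large-sample data mining the text advocates.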
Decision Making in Children with Attention-Deficit/Hyperactivity Disorder

Background: Informed consent forms and clinical study participation explanations contain many specialized words, including medical terms, that are difficult to understand. The difficulty is particularly obvious for children with developmental disorders who show attention or similar problems. This study quantitatively evaluated the decision-making ability of these children using the Wechsler Intelligence Scale for Children-III (WISC-III) as a preliminary study for a multi-faceted investigation that would also use physiological indices. Methods: Participants were 11 children with Attention Deficit/Hyperactivity Disorder (AD/HD). The WISC-III was used for quantitative evaluation of their decision-making ability. Results of intelligence quotients (IQs), group indices, and subtest scores were analyzed. Results: The mean Performance IQ was four points lower than the mean Verbal IQ. The mean score for the Processing Speed index was lower by more than one standard deviation (SD). The mean scores for the Coding and Object Assembly subtests were lower by more than two SDs. Conclusion: The WISC-III results for IQ and group indices suggested the efficacy of auditory explanations. In addition, the subtest results suggested the necessity to pay sufficient attention to risk-benefit weighting in explanations. These findings suggested that the decision-making ability of children with AD/HD could be assessed using the WISC-III.

Introduction

Informed Consent (IC) must be obtained from participants or their parents to be enrolled in a clinical study. In a clinical study on children, informed assent (IA) must be obtained from the child in addition to consent from the parent. Therefore, the decision-making ability of the participant must be evaluated in advance. However, evaluation of decision-making ability is a challenge when the participants are children with disorders. Recently, numerous studies have been conducted on developmental disorders; some reported that children with attention deficit/hyperactivity disorder (AD/HD) show specific behavioral problems including inattentiveness, impulsiveness, and hyperactivity. In addition, previous studies suggested that these children have only restricted rationality with a limited range of cognition/reasoning; thus, their decision-making capacities for IC/IA may be low [1] [2]. In healthy individuals, cognitive ability does not directly correspond to decision-making ability. However, in patients with the above disorders, evaluation of cognitive ability is expected to provide effective information in determining the level of decision-making ability. The Wechsler Intelligence Scale for Children-III (WISC-III) is a representative test for evaluating cognitive ability. Cognitive ability assessed by the WISC-III is defined as the combination of the participant's individual abilities to act intentionally and think rationally in the environment and to deal effectively with it [3]. Some previous case-based studies have investigated cognitive characteristics of children with developmental disorders using the WISC-III intelligence quotients (IQ) or group indices [4] [5]. However, to date, no group-based study involving the WISC-III subtests has been reported on the quantitative characteristics of decision-making ability for IC/IA of children with developmental or mental disorders. This study investigated decision-making ability for IA quantitatively using the WISC-III subtests.
Participants

The participants were children diagnosed with AD/HD based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV-TR), at the National Center Hospital, National Center of Neurology and Psychiatry pediatric outpatient clinic, who provided written informed consent for study participation after receiving explanations between 2008 and 2013. One patient was taking methylphenidate and atomoxetine. Sample size was determined based on previous studies in children with AD/HD [1]. The ethics board of the National Center of Neurology and Psychiatry reviewed and approved this study's methods.

Methods

WISC-III intelligence quotients (IQ) and the WISC-III subtests were used. WISC-III mean scores of typically developing children were used as the control, with deviations expressed in standard deviation units. Therefore, the standard value of the IQ and group indices was 100 (SD = 15) and the standard value for the subtests was 10 (SD = 1.5).

Results

Mean age was 9.64 ± 2.14 years. Detailed participant profiles are shown in Table 1. The mean IQs and group indices of all participants are summarized in Figure 1. The mean scores were 84.8 ± 19.4 for Full-scale IQ, 88.3 ± 18.0 for Verbal IQ, and 84.4 ± 19.8 for Performance IQ. The mean Performance IQ was four points lower than the Verbal IQ. The group index means were 90.4 ± 18.3 for Verbal Comprehension, 86.9 ± 18.6 for Perceptual Organization, 86.5 ± 10.4 for Freedom from Distractibility, and 84.0 ± 21.5 for Processing Speed. In the comparison of the four group indices, the mean Processing Speed index was the lowest, more than one SD below the norm. The mean subtest scores of all participants are summarized in Figure 2. The scores were 9.2 ± 3.1 for Picture Completion, 8.0 ± 4.1 for Information, 6.9 ± 3.4 for Coding, 8.3 ± 3.3 for Similarities, 9.0 ± 4.6 for Picture Arrangement, 7.1 ± 3.4 for Arithmetic, 8.0 ± 3.7 for Block Design, 9.1 ± 4.1 for Vocabulary, 5.8 ± 3.1 for Object Assembly, 8.3 ± 3.2 for Comprehension, 7.4 ± 4.5 for Symbol Search, 8.4 ± 2.2 for Digit Span, and 8.0 ± 3.0 for Mazes. In particular, the mean scores for Coding and Object Assembly were lower by more than two SDs. Correlation between age and the scores of Coding and Object Assembly, respectively, was analyzed by Pearson's correlation coefficient. The results suggested that this tendency remained unchanged even at higher ages.

Discussion

This study investigated decision-making ability in children with AD/HD using WISC-III tests. The principal symptoms of children with AD/HD are inattention, hyperactivity, and impulsivity [6], which may suggest their decision-making ability tends to be lower than that of typically developing children. The tendency may be more obvious in understanding written or oral explanations or content, particularly in an unfamiliar situation. Complex medical terms and other expressions used in informed consent forms sometimes pose a difficulty in understanding. One such example was reported in a clinical study enrolling 287 adult cancer patients; 70% of the participants did not understand that the test drug had not yet been proven effective [7]. However, another study reported that understanding and satisfaction were improved by shortening or simplifying the texts. Therefore, participants may be able to understand the explanation sufficiently to give their consent or agreement, if their decision-making ability is validly assessed and IC/IA forms are customized based on the evaluation [8].
Characteristics Indicated by IQ Results

In this study, the average IQ scores were within two SDs for the Full-scale, Verbal, and Performance scores; this indicated that all participants showed a normal range of intelligence. However, the mean Performance IQ score was four points lower than the Verbal IQ. Verbal IQ assessed with the WISC-III indicates the faculty for comprehending verbal information presented through the ear; therefore, it is related closely to auditory cognition [9]. It is also an index of the auditory/sound processing faculty closely related to crystallized intelligence, including judgment or habits obtained from past learning experiences [4]. On the other hand, Performance IQ is an index of the visual/motor processing faculty closely related to fluid intelligence, facilitating adaptation to a new environment [4]. If these findings are applied to the IA explanation procedure, the superiority of Verbal IQ scores observed in this study suggests the effectiveness of auditory support in presenting the IA form. Specifically, reading the text aloud, responding to questions orally, or similar procedures should be effective. While the participants in this study were children with AD/HD, a previous study with patients with pervasive developmental disorder reported Verbal IQ lower than Performance IQ as a cognitive characteristic. Therefore, the IA presentation method should be adjusted depending on the type of disorder [10].

Characteristics Indicated by Group Index

An analysis of the group index results showed that the score for Processing Speed was significantly low; this may be closely related to the low Performance IQ score. Processing Speed is related to the ability to process a large volume of visual information quickly and accurately [4]. As suggested in the IQ characteristics analysis, this finding also indicates the efficacy of auditory support.

Characteristics Indicated by Subtests

Among the subtest items, scores for Coding and Object Assembly were extremely low. This tendency remains unchanged even at higher ages, suggesting persistent difficulty after the infant stage. Coding is a task under the Processing Speed index that consists of transcribing a geometric figure (Code A) or number (Code B) and the paired simple symbol. The task is supposedly related to abilities including following orders, action agility, speed and accuracy of administrative work, and visual short-term memory [3] [4]. Low scores for Coding suggest that the participant is poor at monotonous visual tasks, and this finding corresponds to a characteristic of individuals with AD/HD. Therefore, it is not effective to simply show them an explanation document full of text at one time; procedures are necessary to avoid monotony, such as reading the text by block, receiving interspersed questions, and confirming the content again after the explanation. Object Assembly is a subtask under the Perceptual Organization index, where pieces are presented in a specific arrangement and the subject is asked to combine the pieces to complete a specific form. It is supposedly related to the abilities to use sensorimotor feedback, forecast correlations among parts, think flexibly, or perceive the whole from the parts [3] [4] [11]. Applying these findings to the IA explanation, appropriate risk-benefit weighting of the study may be difficult for these children [2]. Therefore, sufficient explanation should be provided to the participant without underestimating the risks of the study.
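The "more than two SDs" statements above follow directly from the reported subtest means and the norms stated in the Methods (mean 10, SD 1.5); the short sketch below simply reproduces that arithmetic.

```python
# Reproducing the subtest deviations from the norms stated in the Methods
# (normative mean = 10, SD = 1.5) and the group means reported in the Results.
norm_mean, norm_sd = 10.0, 1.5
subtest_means = {"Coding": 6.9, "Object Assembly": 5.8}

for name, score in subtest_means.items():
    z = (score - norm_mean) / norm_sd
    print(f"{name}: z = {z:+.2f}")
# Coding: z = -2.07; Object Assembly: z = -2.80 -- both more than two SDs
# below the normative mean, as stated in the text.
```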
Limitations

The study was based on the results of the Japanese version of the WISC-III. The Japanese version of the WISC-IV was published in 2010; therefore, new studies using the more recent version will be required. In addition, a greater number of participants is required for a more detailed study on the impact of factors including growth. Further studies are also necessary that focus on other disorder groups as well as on relationships with cerebral function.

Table 1 legend: M, male; F, female; R, right; L, left.

Figure 1. Mean IQs and group indices of all participants.

Figure 2. Mean subtest scores of all participants.
Bio-behavioral synchrony is a potential mechanism for mate selection in humans

The decision with whom to form a romantic bond is of great importance, yet the biological or behavioral mechanisms underlying this selective process in humans are largely unknown. Classic evolutionary theories of mate selection emphasize immediate and static features such as physical appearance and fertility. However, they do not explain how initial attraction temporally unfolds during an interaction, nor account for mutual physiological or behavioral adaptations that take place when two people become attracted. Instead, recent theories on social bonding emphasize the importance of co-regulation during social interactions (i.e., the social coordination of physiology and behavior between partners), and predict that co-regulation plays a role in bonding with others. In a speed-date experiment of forty-six heterosexual dates, we recorded the naturally occurring patterns of electrodermal activity and behavioral motion in men and women, and calculated their co-regulation during the date. We demonstrate that co-regulation of behavior and physiology is associated with the date outcome: when a man and a woman synchronize their electrodermal activity and dynamically tune their behavior to one another, they are more likely to be romantically and sexually attracted to one another. This study supports the hypothesis that co-regulation of sympathetic and behavioral rhythms between a man and a woman serves as a mechanism that promotes attraction.

Results

Electrodermal synchrony during a date is associated with romantic interest. Synchrony in electrodermal activity between a man and a woman during a date significantly correlated with mutual romantic interest (multilevel model analysis: β = 2.83, p = 0.006; Pearson r = 0.51, 95% confidence interval = [0.23, 0.72], p < 0.001, BF10 = 73.23). This effect is calculated on data combined from three runs 44 because it was replicated across three runs with different participants (Fig. 2).

A bio-behavioral marker for date outcome. In addition to electrodermal synchrony, we also assessed the motor attunement during the interaction using pixel-by-pixel video analysis and computed cross-correlations in body movements (see "Methods"). A multilevel model analysis shows that both electrodermal synchrony (β = 2.75, p < 0.001) and motor attunement (β = 4.39, p = 0.003) are significantly associated with mutual romantic interest. We then computed a bio-behavioral measure, combining electrodermal synchrony with motor attunement (henceforth marked as bio-behavioral coupling) using linear regression, to test the extent to which bio-behavioral coupling predicts the date outcome (Fig. 3). Next, we applied a linear regression model with a leave-one-out procedure. We predicted the outcome of each date by calculating a linear regression model that includes all other data points, and not the predicted date. The model used both electrodermal synchrony and motor attunement, in order to predict whether the partners were attracted to one another in each date.
Figure 1. The experimental setting. A man and a woman meet for a speed date while their behavior and physiology are being recorded, providing 1200 physiological data points (sampled at 4 Hz) and 3000 behavioral data points per date for each subject (sampled at 10 Hz). The room design, as well as the ambulatory recording equipment, enabled participants to freely interact, for an ecologically valid estimation of bio-behavioral measures naturally occurring during a romantic date. After the date, participants rated their romantic interest and sexual attraction to the partner. We measured the association between physiological synchrony and behavioral attunement during a date, and the romantic and sexual ratings after the date.

Our findings show that the combined measures of bio-behavioral coupling correctly predicted the outcome of 71% of the successful dates ('both interested') and unsuccessful dates ('no one interested') (Fig. 3), which is significantly different from chance (H0 ~ Binomial(21, 0.5), p = 0.013). This provides evidence for a role of electrodermal synchronization and behavioral attunement in romantic attraction. While synchrony is not a necessary condition for romantic interest, it is indicative of it: if bio-behavioral coupling during a date is high, the romantic interest of both dyadic partners during this date is high as well. The association between motor attunement by itself and mutual romantic interest showed a moderate correlation and was marginally significant (r = 0.233, p = 0.06).

Bio-behavioral coupling explains the variance of romantic interest over and above physical appearance. We estimated the contribution of bio-behavioral coupling in explaining the variance in mutual romantic interest, over and above physical appearance, using a multilevel model analysis. Bio-behavioral coupling significantly predicted mutual romantic interest (β = 3.66, p < 0.007), when controlling for the random effects of both run number and participant. Physical appearance marginally predicted the date outcome in men (β = 0.39, p < 0.069), and significantly in women (β = 0.52, p < 0.005), when controlling for the random effects of both run number and participant. Bio-behavioral coupling is also a significant predictor of mutual interest when using a linear model. When controlling for the physical appearance of women and men, both electrodermal synchrony (β = 8.00, p = 0.032) and motor attunement (β = 10.30, p = 0.019) are significant predictors of mutual romantic interest (partial correlation, Pearson r = 0.61, 95% confidence interval = [0.41, 0.80], p < 0.001, BF10 = 37.29). Physical appearance on its own in both genders explains 24.7% of the variability (Pearson r = 0.50, p = 0.019). Computing a bio-behavioral marker of coupling significantly improves the linear regression model well beyond physical appearance, to 56.4% (Pearson r = 0.75, p < 0.001, BF10 = 1243.36). The contribution of bio-behavioral coupling in variance accounted for (ΔR²) equals 26.7%, which is significantly different from zero (F(2, 21) = 6.43, p = 0.007).

Gender differences. Having established a positive association between bio-behavioral coupling and mutual romantic interest, we further examined potential gender differences by testing the preferences of women and men separately. Moreover, since every subject participated in multiple dates, we computed the average electrodermal synchrony scores of each participant across all of their dates. Then, we tested if individual synchrony scores are associated with sexual attractiveness.
A multilevel model analysis revealed a significant gender-by-synchrony interaction, which indicates that women are more attracted to synchronous men than men are attracted to synchronous women (β = 3.12, p = 0.037). Similarly, Spearman correlations show that women are sexually more attracted to men who are better synchronizers (Spearman ρ = 0.68, 95% confidence interval = [0.24, 0.93], p = 0.004, Bonferroni alpha = 0.025, BF10 = 7.55, Fig. 4). This effect was not evident in men (Spearman ρ = −0.29, 95% confidence interval = [−0.71, 0.16], p < 0.300, BF10 = 1.127).

Discussion

Our findings demonstrate that bio-behavioral coupling during a date predicts romantic interest and sexual attraction in humans. This study shows that successful dates, which resulted in mutual romantic interest, are characterized by increased electrodermal synchrony and attunement of behavior. When a man and a woman are highly synchronous and attuned during a date, their mutual romantic and sexual interest are high as well. This provides evidence that sexual and romantic attraction in humans involve social adjustment of the sympathetic nervous system and motor behaviors. Moreover, the results indicate that individuals who are better at adapting their physiology and behavior to their partner during the date are more likely to attract a partner. This suggests that adaptation of physiology and behavior to a partner during interaction can promote romantic bonding. Synchronization of activity in the sympathetic nervous system during human interaction could reflect a social effect on the regulation of arousal and affect (i.e., 'social regulation'). The ability to tap onto another human's physiology is a central evolutionary feature that supports survival in social animals 25. Social regulation is vital in early life, as infants rely on their caregivers to regulate most aspects of their metabolic processes, including energy expenditure, immunity, temperature and arousal [25][26][27]. Despite gradual weaning from the complete dependence on social regulation, adults' physiology and behavior are continually predisposed to be regulated by others, in particular close others 37,[46][47][48]. Multiple physiological processes are socially regulated through synchronization among romantic partners, from autonomic processes such as arousal 35, respiration and blood pressure [49][50][51], through endocrine processes marked by plasma levels of hormones such as cortisol and oxytocin 52,53, to neural processing measured with electroencephalogram (EEG) 29,54 and Functional Near-Infrared Spectroscopy (fNIRS) 55.

Figure 3 (caption excerpt). A multilevel model analysis shows that both electrodermal synchrony (β = 2.75, p < 0.001) and motor attunement (β = 4.39, p = 0.003) are significant predictors of mutual romantic interest. With a leave-one-out linear regression classifier, the model correctly classified 71% of the successful dates ('both interested', in yellow) and unsuccessful dates ('no one interested', in red). Correctly predicted dates are marked with a black circle. (C) A linear regression showing the association between bio-behavioral coupling (i.e., electrodermal synchrony and motor attunement, x-axis) and ratings of mutual romantic interest (y-axis) (Pearson's r = 0.56; p < 0.001, BF10 = 220.96). The dashed line depicts the threshold between successful and unsuccessful dates that enables optimal prediction of the date outcome, calculated with a Receiver Operating Characteristic (ROC) curve (see "Methods").
Through bio-behavioral social regulation, metabolism is optimized during interactions 25. Thus, bio-behavioral coupling during a date may be indicative of the efficacy of social regulation, marking an adaptive fit between a male and a female. While synchrony is a dyadic phenomenon, constructed in time by both partners, it is not equally associated with sexual preferences among men and women. This study reveals gender differences in sexual attraction: women desire high-synchrony men, more so than men desire high-synchrony women. Several studies report a gender difference in the involvement of the sympathetic nervous system in sexual arousal, which is stronger in females 56,57. This is in line with the stronger association found between women's sexual desire and synchrony in electrodermal activity, a sympathetic measurement. This gender difference suggests that classic evolutionary theories, which couple sexual desire with reproduction, seem more relevant to males, and underestimate the role of social interaction and cooperation in sexual selection 19,20. In women and other female primates, sexual desire is uncoupled from reproduction, and sexual interactions occur throughout the menstrual cycle 58. As such, the purpose of sexual and romantic desire in primates cannot be limited to fertility, and can serve an additional social purpose of bonding 59. Whereas physical appearance is a well-studied feature in mate selection 15,16, this study points to an additional mechanism, co-regulation of physiology and behavior, which impacts mate selection beyond physical appearance in both males and females. When assessing dyadic interaction, not only synchrony (i.e., the instantaneous matching between partners) is relevant for affiliation, but also the sequential impact partners have on each other, or attunement 37,60,61. Being sensitized to the partner's behavioral cues, and attuned adjustment of behavior in response to those cues, are key features in social interaction, bonding and closeness 36,40,62. Synchrony and attunement are complementary features in dyadic interaction, together enabling both simultaneous resonance and attuned responsiveness, and can maximize the social impact on the regulation of physiology during interaction. Indeed, it is the combination of both synchrony and attunement (measured here as 'bio-behavioral coupling') that best predicted the partners' attraction. This research found a positive association between electrodermal synchrony and motor attunement. Future research is needed to test whether interactive flexibility serves as a behavioral mechanism to achieve physiological synchrony. Previous research on speed dating characterized verbal and nonverbal parameters that are associated with attraction and romantic interest [63][64][65][66][67]. Yet, this is the first study to our knowledge that combines dynamic measures of naturally occurring behavior and physiology during a first date, to assess the theoretical idea that bio-behavioral adaptation to a romantic partner can serve as a means for co-regulation and promote romantic and sexual attraction. The theoretical idea that bio-behavioral synchrony is a strategy for co-regulation, which promotes attachment, has been extensively studied in the parent-infant bond 21,68.
Here we show evidence supporting a similar mechanism in romantic bonds, whereby high bio-behavioral coupling during a date is indicative that both partners are interested in each other. It is important to note that we cannot determine the direction of causality between bio-behavioral coupling and romantic or sexual preferences. It is possible that bio-behavioral synchrony increases attraction, or that feeling attracted improves the ad-hoc motivation to synchronize. Another unknown is whether the extent of synchrony during a date reflects an a-priori compatibility between the partners, or an individual "synchrony-aptitude" that enables one to adapt to the partner in order to attract them. The study design, in which each participant met with multiple partners, provided evidence that some individuals tend to synchronize with their partner more than others, regardless of the partner and across multiple partners. Moreover, these synchronous individuals were found to be more attractive. This indicates that rather than a-priori compatibility, sexual and romantic attraction is potentially determined by the ability to adapt to the partner. Nonetheless, future mechanistic studies in adults are needed to assess the causal effects of synchrony on partner selection. Future studies are warranted to test these hypotheses in same-sex couples, and in other cultures as well. In particular, given the gender differences, sexual and romantic attraction in women-women bonds and men-men bonds can shed light on the role of sexual and romantic desire as a potential mechanism of bonding, beyond the purpose of fertility. To conclude, our findings provide evidence associating biological synchrony and behavioral attunement with romantic and sexual attraction. This suggests that co-regulation of bio-behavioral rhythms during interaction might function as a mechanism for mate selection in humans.

Methods

Participants. Thirty-two undergraduate students (16 women), with an age range of 21-28 years (Mean = 24, SD = 1.7 years), participated in the experiment. Participants were recruited via the university online experiment system. All participants were native Hebrew speakers, heterosexual, cis-gender, single, and interested in a romantic relationship. Participants were remunerated for their participation. The experiment was approved by the ethical committee of the Faculty of Social Sciences of the Hebrew University of Jerusalem and performed in accordance with relevant guidelines and regulations. Each participant signed an informed consent form prior to participation.

Procedure. The experiment was conducted in three experimental runs of sixteen dates, in which a male and a female met for a five-minute date. In the first and second runs, 4 female participants and 4 male participants were invited to the lab, resulting in 16 dates each. The third run consisted of four rounds (2 female participants and 2 male participants in each round). Overall, the cohort includes 48 dates; two dates were excluded from analysis because the partners were formerly acquainted. Dates occurred one at a time in a dedicated room. During the dates, we recorded the subjects' electrodermal activity at a frequency of 4 Hz using an Empatica E4 wristband. Moreover, we video-recorded the dates and conducted an automated video analysis to infer the motion of each partner during the date, pixel-by-pixel and frame-by-frame, at a frequency of 10 Hz.
Acquisition of behavioral data from one date and of physiological data from four dates was not completed due to equipment malfunction, and these were not included in the analyses. Thus, while all behavioral analyses were conducted on forty-five dates, all physiological analyses were based on forty-two dates; physical appearance ratings were not provided by women in ten dates and by men in six dates. Before and in between dates, each participant was assigned a private waiting room. While waiting, participants did not have access to their smartphones. Participants refrained from caffeinated and/or sugared food and beverages. Upon date onset, the experimenter escorted subjects to the date room. Immediately after each date, subjects returned to their private waiting room, where they rated their romantic interest and sexual attraction to the partner, and the partner's physical appearance.

Sample size and power. When there are multiple sources of random variability in a design, the most accurate method of determining the power is simulations that capitalize on random effects revealed in actual data (see Finkel et al., 2015) 69. Thus, to estimate the sample size, we ran an initial sample (run 1) and used a bootstrapped power calculation. In this run, each subject participated in 4 dates, giving a total of 16 dates. Using bootstrap sampling (n = 10,000), we found that adding two more runs to our experiment (i.e., n = 48) would yield a statistical power of ~77%. Hence, we stopped data collection after three runs and excluded dates with no physiological data, leaving 42 valid dates. Additionally, according to previous studies in relationship science (Finkel et al., 2015) 69, sensitivity analysis indicates that the detectable effect size with the current sample size is ~0.3 and above, at an alpha of 0.05 and power of 0.8 63,70. Given the additional sources of variance due to the multilevel design (participant, date, run), we also estimated the statistical power of the entire sample across three runs by calculating the post-hoc power for the multilevel model using the powerSim function (from the simr package) in R 3.6.1 71. This analysis revealed a power of 73% to find a significant effect of electrodermal synchrony on romantic interest (as reported in Fig. 2).

Data acquisition and analyses.

Behavior. We applied automated video analysis to infer the motion of each partner during the date 37. From each video, we separated the images into two regions, each one containing one partner of the dyadic interaction. We then extracted the velocity of each pixel in each frame (using the optical flow algorithm 72), and summed their squared values over all pixels to assess the total motion of each partner at each moment of the date. Thus, for each participant we have a measure of the total motion at each time point in the date (sampled at 10 Hz).

Physiology. During the five-minute date, we measured participants' electrodermal activity using Empatica E4 wristbands. Electrodermal activity refers to the continuous variation in the electrical characteristics of the skin. Typically, it is measured as skin conductance by applying a small, constant voltage to the skin. As the voltage is kept constant, skin conductance can be calculated by measuring the current flow through the electrodes. Electrodermal activity is controlled by the sympathetic nervous system and reflects the level of arousal and orientation of attention 42,43.
Electrodermal activity has been reported to be sensitive to social stimuli and reactive during social interactions 73,74, and specifically between romantic partners 35. Thus, electrodermal activity is a key target for assessing autonomic synchrony in a romantic context. The E4 wristband is a wearable and wireless device. We placed the wristbands on the inner wrist of participants' right hand 75 and recorded the electrodermal signal throughout the date. Then, to calculate electrodermal synchrony, the electrodermal signals from both partners were temporally aligned using a global timestamp marking the beginning of each date 76,77. The wristbands contain an electrodermal activity sensor with a sampling frequency of 4 Hz, a resolution of 1 digit (~900 pSiemens), a range of 0.01-100 μSiemens, and an alternating current (8 Hz frequency) with a maximum peak-to-peak value of 100 μAmps (at 100 μSiemens). The obtained electrodermal signal was streamed via Bluetooth to an E4 App for iOS/Android for online control. Subjects were wearing the E4 devices for at least one minute prior to starting the date, to ensure proper sensor calibration 76. Previous studies reported the use of Empatica E4 wristbands in behavioral experiments that measured electrodermal activity [78][79][80]. Important advantages of the E4 wristbands are that they are quick to connect and wireless. Hence, unlike electrode-based devices that measure electrodermal activity, they do not interfere with natural behavior in experimental settings, enhancing the ecological validity of the obtained results. However, since this method is not the gold standard for measuring electrodermal activity, we conducted a separate experiment to validate the E4 signal. Specifically, we compared the output from the Empatica E4 wristbands to the skin conductance output of an Atlas constant voltage system (0.5 V; ASR Atlas Researches, Hod Hasharon, Israel). The Atlas system has been used in dozens of physiological experiments over the last 2 decades [81][82][83]. Importantly, our validation experiment is consistent with previous research validating the E4 device [84][85][86][87], providing support for the validity of the Empatica wristbands (see Supplementary Figure S1, as well as the Supplementary Results section, for a full description of the Empatica E4 validation experiment). See the Supplementary Results for a detailed description of each date's electrodermal activity (Figure S5) and motion data (Figure S6), as well as descriptive statistics (Table S1).

Computing synchrony. Synchrony was computed using Matlab and R 3.6.1 88. We computed two dyadic measures that reflect the bio-behavioral interactive transactions between men and women during the date: Dyadic Synchrony and Dyadic Attunement.

1. Calculating dyadic synchrony. Synchrony reflects the partners' alignment of behavior or physiology at the same time (i.e., the cross-correlation at lag 0). For each date, we calculated dyadic synchrony in electrodermal activity and behavioral motion. Correlations were squared in order to account for interpersonal coordination in both directions (i.e., in-phase and anti-phase synchrony) 89.

2. Calculating dyadic attunement. In addition to synchronization, we also assessed the motor attunement of partners to each other by testing who leads the interaction and who follows. To trace changes in leader-follower turn-taking, we divided each time-series into ten-second windows with five-second offsets. Then, we computed the cross-correlation of motion in each window.
The cross-correlation function indicates the level of correlation at different time-lags. Synchrony at positive time-lags indicates leadership of one partner, while synchrony at negative time-lags indicates leadership of the other partner. We then summed across all time-lags to indicate the dominance in leadership in that specific time-window (i.e., the center of mass of the cross-correlation function at that specific time-window). The cross-correlation function is given by

$$c(\tau) = \frac{\sum_{n=0}^{N-\tau-1} E_{\mathrm{woman}}(n+\tau)\, E_{\mathrm{man}}(n)}{\mathrm{std}(E_{\mathrm{woman}})\,\mathrm{std}(E_{\mathrm{man}})},$$

where $E_{\mathrm{woman}}(n)$ is the woman's motion and $E_{\mathrm{man}}(n)$ is the man's motion at frame/sample $n$ (see Fig. 5). The variance of this parameter across all time-windows indicates leader-follower turn-taking, that is, the flexibility of motion leadership across the date, and thus indicates the attunement of the partners in the date. Attunement was similarly calculated for the electrodermal activity data.

Romantic ratings. After each date, each participant rated the date on three measures on a scale of one to five: romantic interest in the partner, sexual attraction to the partner, and the physical appearance of the partner. Thus, for each participant per each date, we calculated the personal scores of Romantic Interest in partner, Sexual Attraction to partner, and Physical Appearance of partner. Moreover, a dyadic variable of Mutual Romantic Interest was computed as the sum of the Romantic Interest of both the man and the woman. Romantic interest did not differ across the three experimental runs (see Supplementary Figure S2). We separated the dates into those where both or neither of the partners were interested (romantic ratings correlation between partners: rho = 0.69; p < 0.001), and those where only the man or only the woman was interested.

Testing the associations between synchrony and date ratings.

The association between synchrony and romantic interest. To model the synchrony across the dates, we first computed the time course of synchrony using a sliding window. We found that successful dates are characterized by increased electrodermal synchrony in the first two minutes (see Supplementary Figure S3). Therefore, we computed electrodermal synchrony across the first two minutes and applied Pearson correlations to test the association between the dyadic measures and romantic interest. We computed bootstrapped confidence intervals with 1000 iterations. While the most significant differences were found in the first two minutes, electrodermal synchrony calculated across the entire 5 minutes is also significantly associated with romantic interest (see Supplementary Figure S4). Analyses were corrected for multiple hypothesis testing based on four independent variables: Motion Synchrony, Motion Attunement, Electrodermal Synchrony, and Electrodermal Attunement.

The predictive power of bio-behavioral coupling on date success. We applied linear regression and a leave-one-out procedure 91 to assess the predictive power of the bio-behavioral synchrony during the date on the romantic ratings after the date. Specifically, for each date, we calculated a linear regression model which includes all data points except that date. The regression model then uses electrodermal synchrony and motor attunement to predict the romantic interest score of the remaining date. This procedure was repeated twenty-one times (which equals the total number of successful and unsuccessful dates).
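A compact sketch of this leave-one-out step is given below, assuming one value of electrodermal synchrony, motor attunement, and mutual romantic interest per date. The variable names and toy data are illustrative; the authors' analyses were run in Matlab and R, so this is a reconstruction of the procedure rather than their code.

```python
# Leave-one-out prediction of mutual romantic interest from electrodermal
# synchrony and motor attunement, reconstructing the procedure described in
# the text. The per-date input arrays here are toy placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_dates = 21                                   # successful + unsuccessful dates
eda_synchrony = rng.uniform(0, 1, n_dates)     # placeholder per-date synchrony
motor_attunement = rng.uniform(0, 1, n_dates)  # placeholder per-date attunement
mutual_interest = (2 + 3 * eda_synchrony + 4 * motor_attunement
                   + rng.normal(0, 0.5, n_dates))

X = np.column_stack([eda_synchrony, motor_attunement])
scores = np.empty(n_dates)
for train, test in LeaveOneOut().split(X):
    # Fit on every date except the held-out one, then predict that date.
    model = LinearRegression().fit(X[train], mutual_interest[train])
    scores[test] = model.predict(X[test])
# `scores` now holds one out-of-sample prediction per date; these are the
# prediction scores that feed the ROC analysis described next.
```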
The prediction scores were used to construct a Receiver Operating Characteristic (ROC) curve. This curve is commonly used in binary classification decisions. It describes the performance of our regression model across all discrimination thresholds, by plotting the true positive rates (i.e., the proportion of dates where both partners were interested that is classified as 'both interested') against the false positive rates (i.e., the proportion of dates where neither partner was interested that is classified as 'both interested'). We then selected the optimal threshold, i.e., the point on the curve which provides the best separation between successful dates ('both interested') and unsuccessful dates ('no one interested'), and predicted the outcome of each date.

The association between synchrony and sexual desire. Calculating a personal synchrony score. Since every subject participated in multiple dates, we computed the average dyadic coupling scores for each participant. We used Spearman correlation to test whether high synchrony scores are associated with sexual attraction.

Multilevel model analyses. Whenever mentioned in the main text, dyadic data were analyzed using multilevel models 44,70,92. These models account for the statistical non-independence of the data points, as individuals participated in several dates and belonged to three different runs. The models controlled for the random effects of recurring data from individuals that repeated in different dates and for their nesting in specific runs. For bio-behavioral coupling, we used a model with fixed effects for bio-behavioral coupling and physical appearance, while accounting for the random effects on the intercept of recurring data from individuals that repeated in different dates and for their nesting in specific runs. See the code for each calculation at https://osf.io/5b246/.

Bayesian analyses. To augment the classical statistical inference (i.e., correlations, linear regression, and multilevel model analyses), we also included Bayesian analyses and computed Jeffreys-Zellner-Siow (JZS) Bayes Factors (BFs). The default prior settings (used by R) were left unchanged. BF values ≥ 3 are considered to provide substantial to moderate support for the alternative hypothesis 93. For all correlations, either the BF 10 (favoring the alternative hypothesis) or the BF 01 (favoring the null hypothesis) is reported. All plots were created using the ggplot2 package 50 in R 3.6.1 45.

Figure 5. Cross-correlation values at different time-lags indicate dominance in leadership. In this graphical scheme, when summing across all time-lags in a cross-correlation analysis in a specific window, either the woman leads the dyadic interaction, or the man does. When most of the mass of the cross-correlation function is on the negative side (left plot), the woman is the dominant leader in this time window. When most of the mass is on the positive side (right plot), the man is the dominant leader in the time window. We computed the dominance in leadership per each sliding window to assess the turn-taking in interactive leadership, for participants' motion and physiology.

Data availability
All data, as well as the analysis scripts, were uploaded to a public repository at https://osf.io/5b246/ and can be made available upon request.
Study of the genetic association between selected 3q29 region genes and schizophrenia and autism spectrum disorder in the Japanese population

ABSTRACT
Psychiatric disorders are highly heritable, and most psychiatric disorders exhibit genetic overlap. Recent studies associated the 3q29 recurrent deletion with schizophrenia (SCZ) and autism spectrum disorder (ASD). In this study, we investigated the association of genes in the 3q29 region with SCZ and ASD. TM4SF19 and PAK2 were chosen as candidate genes for this study based on evidence from previous research. We sequenced TM4SF19 and PAK2 in 437 SCZ cases, 187 ASD cases and 524 controls in the Japanese population. Through targeted sequencing, we identified 6 missense variants among the cases (ASD & SCZ), 3 missense variants among controls, and 1 variant common to both cases and controls; however, no loss-of-function variants were identified. Fisher's exact test showed a significant association of variants in TM4SF19 among cases (p=0.0160). These results suggest TM4SF19 variants affect the etiology of SCZ and ASD in the Japanese population. Further research examining 3q29 region genes and their association with SCZ and ASD is thus needed.

INTRODUCTION
Schizophrenia (SCZ), a neuropsychiatric disorder that affects approximately 1% of the worldwide population, 1 exhibits positive symptoms (delusions and hallucinations; so-called psychotic symptoms in which contact with reality is lost), negative symptoms (particularly impaired motivation, reduction in spontaneous speech, and social withdrawal) and cognitive impairment. 2 SCZ is highly heritable, with rates ranging up to 80%. 3 Autism spectrum disorder (ASD) is a term used to describe a constellation of early-appearing social communication deficits and repetitive sensory-motor behaviors associated with a strong genetic component as well as other causes. 4 The heritability of ASD is estimated to range from 64% to 91%. 5 Even though SCZ and ASD are different disorders, they exhibit phenotypic similarities and genetic overlap. A particularly compelling example of overlapping genetic vulnerability is the high rates of ASD and SCZ seen in individuals with 22q11.2 deletion syndrome 6 and 3q29 deletion syndrome. 7,8 Recent studies have reported that individuals with 3q29 deletion syndrome have a high risk of developing SCZ and ASD. 9

The typical 3q29 recurrent deletion is 1.6 Mb in size and contains 22 protein-coding genes, including PAK2, TFRC, TNK2, APOD, HES1, OPA1, TM4SF19, MUC4, and DLG1. 10 The 3q29 recurrent deletion is characterized by neurodevelopmental and/or psychiatric manifestations, including mild-to-moderate intellectual disability, ASD, anxiety disorders, attention deficit/hyperactivity disorder, executive function deficits, graphomotor weakness, and psychosis/SCZ. 11 The 3q29 deletion confers a 41.1-fold increased risk for developing SCZ. 8 Several SCZ and ASD candidate genes have been implicated in this region, including DLG1, PAK2, 12 and FBXO45. 8,13 In addition, TM4SF19 14,15 has been reported as a possible ASD candidate gene. As the 3q29 deletion plays a significant role in SCZ and ASD patients, we wanted to examine whether genes in this region may also affect the risk of developing SCZ and ASD. In previous studies, we examined DLG1 16 and FBXO45. 13 Therefore, in this study, we examined TM4SF19 and PAK2 as candidate genes.
The aim of this study was to investigate the relationship between variants in 3q29 region genes and SCZ and ASD. We conducted targeted sequencing of TM4SF19 and PAK2 and performed association analyses on rare missense variants.

Samples
The sample set included 437 SCZ cases, 187 ASD cases, and 524 healthy controls. All participants in our study were Japanese, recruited by Nagoya University Hospital and its co-institutes and co-hospitals. Cases were diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders, fifth edition criteria for SCZ and ASD. Controls were evaluated using an unstructured interview to confirm they had neither a personal nor a family history of psychiatric disorders. We obtained written informed consent from all participants and/or their guardians. This study was approved by the Ethics Committees of Nagoya University Graduate School of Medicine and co-institutes and co-hospitals.

Sequencing and data collection
Genomic DNA was extracted from whole peripheral blood or saliva via standard protocols. We included only coding regions. The Ion AmpliSeq Library Preparation protocol was used to prepare a DNA library for the selected genes, and the Ion Xpress barcode adapter was used for each DNA library. Sequencing was performed on an Ion PGM Sequencer using the Ion 318 Chip v2. Up to 48 barcoded libraries were loaded onto a single ion chip. All procedures followed the Ion Personal Genome Machine System Reference Guide (revision: 2018.09). Sequencing data were analyzed using Ion Reporter software (plugins: coverage analysis, variant caller, file exporter). Fisher's exact test (two-tailed) was used to calculate associations between samples and the selected genes, with the threshold of significance set at p<0.05.

Filtering conditions and in silico analysis
We determined whether the identified variants were registered in the Exome Aggregation Consortium (ExAC), 17 the Tohoku University Medical Megabank Organization (ToMMo) 8.3KJPN, 18 the Genome Medical Alliance Japan Whole Genome Aggregation (GEM-J WGA), 19 or the Human Genetic Variation Database (HGVD). 20 The status of each variant was investigated using ClinVar. 21 However, 1 of the 3 variants was found in the control group. Four variants were not registered in any general databases, including ExAC, ToMMo 8.3KJPN, GEM-J WGA, and HGVD. None of the 10 variants were reported in ClinVar (Table 2).

DISCUSSION
PAK2 is a ubiquitously expressed member of the p21-activated kinase family that plays a central role in regulating neuronal cytoskeleton dynamics. By regulating actin formation, PAK2 affects the morphology of synapses and the glutamate receptor complexes localized to synapses. 25
A previous study conducted on Han Chinese ASD probands identified a rare de novo nonsense variant and inherited damaging missense variants in PAK2. 12 Consistently, Pak2+/− mice were reported to exhibit autism-related behaviors, such as increased stereotypic behavior and reduced social interactions. 12 The TM4SF19 gene encodes a protein that belongs to the four-transmembrane L6 superfamily. Members of this protein family are involved in several cellular processes, such as proliferation, motility, and adhesion, in cooperation with integrins. 26 In humans, TM4SF19 is expressed at high levels in the parietal lobe, occipital lobe, hippocampus, pons, white matter, corpus callosum, and cerebellum. 15 TM4SF19 was identified as an ASD candidate gene in a study conducted in the Chinese population (ASD, n = 536; controls, n = 1457). 27

In this study, variants in TM4SF19 were significant among cases (ASD & SCZ) according to Fisher's exact tests. However, no significant associations were noted for PAK2 between cases and controls. This study is the first to sequence TM4SF19 and PAK2 (3q29 region) in the Japanese population. The results of this study identified some variants (p.C152Y; p.I73T) not previously reported in ExAC, ToMMo 8.3KJPN, GEM-J WGA, or HGVD, suggesting that some of the variants identified in this study may be specific to the Japanese population.

Even though the function of the TM4SF19 protein is unclear, a study published in Oncotarget in 2017 indicated that the C-terminus of transmembrane 4 L six family (TM4SF) proteins plays a significant role in various cellular functions, such as proliferation and migration. 28 A similar mutation within or near the C-terminus of TM4SF19 (mainly expressed in the thalamus) might also affect brain development in patients with psychiatric disorders.

We identified only three variants in PAK2. This may be because PAK2 is a highly conserved gene with a probability of loss-of-function intolerance score of 0.94, which suggests that deleterious mutations in this gene are likely to be associated with disease. Therefore, a larger sample would be needed to identify disease-associated missense variants in such genes.

LIMITATIONS
The sample size (N = 1148; ASD, n = 187; SCZ, n = 437; controls, n = 524) was too small to acquire strong evidence for any associations between variants in TM4SF19 and PAK2. Due to the small sample size, we grouped the SCZ and ASD cases as one group (ASD/SCZ vs. controls). The 3q29 deletion region contains 22 protein-coding genes. In this study, we chose to include only TM4SF19 and PAK2, based on recent reports of associations with ASD and SCZ. 12,14

CONCLUSION
This study is the first to sequence TM4SF19 and PAK2 in the Japanese population. Our results suggest that TM4SF19 variants may affect the etiology of SCZ and ASD in the Japanese population. However, further research involving larger sample sizes is needed to investigate the 3q29 region genes and their potential associations with SCZ and ASD.

Fig. 1 Locations of novel rare variants in TM4SF19 and PAK2. *The protein structures of TM4SF19 and PAK2 are based on the Human Protein Reference Database. **Each arrowhead indicates the location in the protein, the amino acid change, and the number of patients in which the variant was found. ***Each rectangle indicates a domain region of the protein. ASD: autism spectrum disorder; SCZ: schizophrenia

Table 1 Details regarding the identified missense variants
Table 2 Frequency of each variant identified in this study, as shown by allele count
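To illustrate the association analysis used above, here is a small sketch of a two-tailed Fisher's exact test on a 2×2 table of allele counts. The counts are hypothetical placeholders, chosen only so the totals match the reported sample sizes; the study's actual counts appear in its Table 2.

```python
from scipy.stats import fisher_exact

# Hypothetical minor-allele vs. reference-allele counts in cases and controls.
#                 minor  reference
table = [[9, 1239],    # cases (ASD & SCZ): 2 * (437 + 187) = 1248 alleles total
         [1, 1047]]    # controls: 2 * 524 = 1048 alleles total

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```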
Task-modality effects on young learners' language-related episodes in collaborative dialogue

In adult learners' collaborative dialogue, oral+written tasks have been found to promote a greater incidence and resolution of language-related episodes and to demand higher levels of accuracy than oral tasks, thanks to the extra time learners have to reflect on their written outcome. No previous studies have tested whether asking learners to attend to accuracy in both modalities would yield similar results. The present study with twenty-three dyads of young English learners supports the superiority of the oral+written modality in the promotion of learning opportunities, even if learners are encouraged to focus on form in the oral modality, a result reinforced by the incorporation of target-like resolved episodes in the written product. However, the intragroup analysis reveals that young learners focus on meaning in equal terms, present low rates of target-likeness, and do not elaborate their resolutions, all of which can be ascribed to their younger age and developing metalinguistic awareness.

Introduction
Studies on collaborative dialogue, operationalized as language-related episodes (LREs) across task modalities (oral vs. oral+written), have received limited attention in the literature, and the vast majority have examined adult learners (Adams & Ross-Feldman, 2008; Niu, 2009; Payant & Kim, 2019), except for García Mayo and Imaz Agirre (2019). All these studies have ascribed the greater incidence and higher number of resolved LREs in oral+written tasks to the extra processing time learners have to reflect on their production. However, no studies exist that have actually tested whether asking learners to attend to accuracy in both modalities would yield similar results. Likewise, very little empirical evidence exists as regards children's attitudes towards these tasks (Shak & Gardner, 2008). This study will try to fill these gaps by analysing the amount and types of LREs produced by primary-school Basque/Spanish bilinguals learning third language (L3) English in a Content and Language Integrated Learning (CLIL) setting in two tasks with different modalities: a speaking task and a speaking+writing task. In addition, this study will shed more light on learner attitudes by measuring their motivation before and after performing these tasks.

Collaborative dialogue and Language-Related Episodes
Collaborative dialogue is considered a source of language learning and development (Brooks & Swain, 2009; Donato, 1994; Swain & Lapkin, 2002; Watanabe & Swain, 2007), as during this dialogue learners may "form and test hypotheses about appropriate and correct use of language, as well as reflect on their language use" (Swain & Watanabe, 2013, p. 3). In particular, LREs are the unit of analysis used to operationalize the construct of collaborative dialogue. Swain and Lapkin (1998) defined LREs as "any part of a dialogue where the students talk about the language they are producing, question their language use, or correct themselves or others" (p. 326). The production of LREs has been positively correlated with subsequent performance on tailor-made post-tests (Kim, 2008; Swain, 1998; Swain & Lapkin, 2001). Task-modality, which is the focus of the present paper, has been one of the factors that seem to affect the production, nature, and resolution of LREs. However, task-modality has received limited attention in the literature and the vast majority of studies have examined adult learners, as we will see in the next section.
Task-modality and LREs
Several studies have attested that some tasks draw learners' attention to form more than others. For example, in more structured tasks such as multiple choice and text repair, learners focus more on form than in less structured tasks such as dictogloss (Adams, 2006). Likewise, research dealing with task-based interaction has also examined the role of different production modes in language learning. It has been argued that speaking and writing impose unique cognitive demands (Payant & Kim, 2019): speaking is ephemeral and characterized by its immediacy, with very little planning and editing time for the learners, whereas writing is more visual and permanent and offers learners more time to think and more opportunities to reflect on their product. Likewise, in writing learners can more easily retrieve declarative knowledge due to additional processing time (Gass, Behney, & Plonsky, 2013, as cited in Payant & Kim, 2019). In other words, writing aids in raising language awareness, as it demands that learners express thoughts in a more precise way (Wolff, 2000). In addition, in classroom settings learners very often consider that teachers or peers could potentially assess their written products, which also leads to an increase in their orientation to form in writing (Adams, 2006).

Adams and Ross-Feldman (2008) examined 44 high-intermediate ESL learners with L1 Spanish while they collaboratively completed two tasks targeting past tense and locative prepositions. These tasks included speaking and writing components. Half of the students performed the speaking and writing parts simultaneously, while the remaining students carried out the speaking part first and subsequently the writing part. LREs were coded in terms of their focus (meaning vs. form), complexity of focus (complex vs. simple), directness of focus (direct vs. indirect) and resolution (resolved vs. unresolved). The overall analysis of the results indicated that, in the case of locatives, learners produced significantly more LREs in the writing component. As regards types, learners also produced more form-focused and complex LREs, as well as direct and resolved LREs, even though not significantly so. In the case of past tense, the different categories analyzed did not yield statistically significant differences. However, the descriptive statistics showed a greater number of LREs, as well as more form-focused, complex, direct, and resolved LREs, in the writing part. When comparing dyads doing the tasks simultaneously to those doing them sequentially, no significant differences emerged in any category, except for resolution, in favor of dyads performing the tasks sequentially. In sum, task-modality influenced the amount of LREs, while the order of administration affected the resolution of LREs.

In an English as a Foreign Language (EFL) setting, García Mayo and Azkarai (2016) explored the effect of task-modality on the incidence, nature, and outcome of LREs produced by Basque-Spanish bilingual learners of L3 English. Same-proficiency dyads were asked to perform four different tasks: picture placement and picture differences constituted the speaking-modality tasks, and dictogloss together with text editing the speaking+writing-modality tasks. In terms of incidence, a greater number of LREs were observed in the writing tasks. As for the nature of LREs, those LREs dealing with form were more common in writing, while those dealing with meaning were more common in speaking.
In the case of outcome, writing tasks yielded a higher number of resolved LREs.

Niu (2009) also examined the production of LREs by upper-intermediate EFL learners in a Chinese university. Eight same-gender pairs were asked to perform a text-reconstruction task. These dyads were randomly assigned to complete the task either as a collaborative oral output task or as a collaborative written output task. Those performing the collaborative written output task were found to initiate more LREs related to lexis, grammar and discourse than those doing the oral output task. Likewise, learners in the written output task provided more justifications and explanations while discussing the language forms they focused on. Similarly, learners in both the oral output and the written output condition made correct decisions while resolving the LREs.

Payant and Kim (2019) tested L1 Spanish intermediate university learners in Mexico while they performed two decision-making tasks with oral and written components in L3 French. In addition, tailor-made post-tests based on the LREs produced during each component were administered to the learners. Modality effects were visible in the higher production of LREs and in the more target-like resolutions during the written modality. However, the focus of LREs was only partially affected by task-modality, as even if meaning-based LREs were more common in the speaking component, the ratio of both lexis- and form-based LREs was comparable. Additionally, the incidence of resolutions facilitated language development in the post-test even if learners did not resolve in a target-like manner.

As aforementioned, research conducted with young learners along these lines is in its infancy. García Mayo and Imaz Agirre (2019) gathered data from sixty-two 6th-year primary school children from three different intact classes. They were asked to complete two tasks: an oral task and an oral+written task. Both were decision-making tasks, but the second one required learners to submit a written product. As in the case of research conducted with adults, more LREs were produced by the 6th-year primary school learners in the speaking+writing modality. As regards the nature of LREs, lexical LREs were more common than form-focused ones in both the speaking and the speaking+writing task. The authors argue that it seems as if these young learners needed to produce more lexical LREs to move both tasks forward. With respect to the resolution of LREs, as in research with adults, a higher percentage of resolved LREs was obtained in the speaking+writing task.

The review of studies on the impact of task-modality on collaborative dialogue indicates that, in general terms, bi-/multilingual learners have been reported to produce and resolve a greater number of LREs in those tasks that combine oral and written modalities (Adams & Ross-Feldman, 2008; García Mayo & Azkarai, 2016; Niu, 2009; Payant & Kim, 2019). This trend has also been very recently attested in young learners (García Mayo & Imaz Agirre, 2019), but this line of research is in its infancy and more investigations are needed in this respect. All these studies have ascribed the greater incidence and higher number of resolved LREs in speaking+writing tasks, as well as the greater accuracy in these tasks, to the extra processing time that this type of task inherently offers learners to reflect on their production.
However, no studies exist that have actually controlled for the different levels of accuracy that speaking and speaking+writing tasks demand as a consequence of their on-line and off-line nature, respectively. Thus, this study will look into whether the framing of the task, by asking learners to attend to accuracy in both modalities, could overrule the inherent focus of the task (Philp, Walter, & Basturkmen, 2010).

Task motivation
Research has also examined learners' perception of the tasks they perform in order to learn a language. The relationship between tasks and motivation is suggested to be shaped by task engagement (Dörnyei & Kormos, 2000) rather than correlated with pre-task value beliefs (Al-Kahlil, 2016). In turn, task engagement has been found to be linked to factors related to task conditions, such as topic choice and cognitive complexity (Poupore, 2013), or the opportunities for interaction and collaboration with peers. In general, collaborative tasks are perceived to be more engaging than individual tasks (Julkunen, 2001; Kopinska & Azkarai, 2020), whereas learner grouping modes (pairs vs. groups) in collaborative tasks do not seem to modify task motivation substantially (Calzada & García Mayo, 2020; Fernández Dobao & Blum, 2013). Overall, learners report that collaborative tasks help them to construct knowledge, boost their self-confidence and even learn grammar (Lin & Maarof, 2013), with more positive attitudes being developed as learners gain familiarity with the task (Kopinska & Azkarai, 2020; Shak, 2006; Shak & Gardner, 2008) or if the collaborative experience with the other task performers is optimal (Chen & Yu, 2019). Besides, learners seem to perceive the blend of the written and the oral mode positively (Calzada & García Mayo, 2020), even though experimental research comparing different modalities is lacking.

It is important to mention that most of the investigations on task motivation reported above have been conducted with adolescents, university students or adult learners. Research on task motivation with young learners is still scarce (Muñoz, 2017a). With the exception of Shak and Gardner (2008), task motivation studies with young learners have focused on the exploration of just one task, namely a dictogloss task (Calzada & García Mayo, 2020; Kopinska & Azkarai, 2020; Shak, 2006). Learners in Shak's (2006) study did not enjoy the writing stage of this task, probably because they performed that part individually. Conversely, participants in Calzada and García Mayo (2020) and in Kopinska and Azkarai (2020) felt more motivated as a consequence of their reported positive attitudes towards the collaboration and the opportunities for peer assistance that the dictogloss task fostered, as well as the blend of the oral and the written modality that they experienced. Taking into account the lack of research on the interface between motivation and the examination of other tasks as well as task-modality, further studies are needed to fill this gap. It is important to know whether tasks actually appeal to young learners, as very little empirical evidence exists in this respect (Shak & Gardner, 2008). Looking into students' attitudes before and after performing the tasks will shed more light on the feasibility and effectiveness of this type of task in primary education classrooms.
Research questions
Given the scarcity of research along these lines with young learners, this study will analyse the amount and types of LREs produced by primary-school Basque/Spanish bilinguals learning L3 English in a CLIL setting when performing a speaking and a speaking+writing task. In particular, we address these research questions:
1. Are there any differences between task-modalities in terms of number, types, and outcome of LREs?
2. What are the most common types of LREs (nature and outcome) in each task?
3. How does the resolution of LREs affect the written product in the speaking+writing task?
4. Are there any differences between the two tasks in terms of student motivation?

Participants
Fifty primary school learners took part in the study. They were Basque/Spanish bilinguals (Cenoz & Valencia, 1994). The CLIL programme, in turn, was an attempt to further improve these learners' foreign language (English) proficiency. As far as English exposure at school is concerned, all the participants had begun learning English as a school subject at about age 4 in pre-primary education. Some years later, when they were in primary education, they also joined the CLIL programme.

Instruments
Data were collected by means of two instruments where students had to collaborate in same-proficiency pairs, namely a speaking task and a speaking+writing task. The speaking task was adapted from an activity taken from the book Sparks 1 (House & Scott, 2009) and it consisted of two different phases. In the first phase, students had to arrange six disordered pictures into a meaningful story. The second phase asked students to tell the story in turns. Picture-ordering plus story-telling tasks have been profusely employed in prior investigations with adults and children for similar purposes.

The speaking+writing task was specifically designed for the purposes of this study and it also consisted of two different phases. In the first phase, students had to look at two different pictures: the first one shows a boy who has found a lost dog in a park, and the second one shows the potential owners of the lost dog, their professions, and a city map with the places where they work. Students had to decide who the owner of the dog is on the basis of some clues, namely a picture on this person's shirt. They also had to guess where on the map the owner of the dog worked. In the second phase, students were asked to write a note for the boy, explaining who the owner of the dog is and why, and also giving directions from the park to the place where the owner works, so that the boy knows how to take the dog back to its owner. Similar decision-making tasks have been administered in previous research for similar purposes (Azkarai & García Mayo, 2015; García Mayo & Imaz Agirre, 2019) 1.

A third instrument was added to this study in order to obtain information about students' motivation towards each of the two collaborative tasks described above at two different times: immediately before and immediately after the completion of each task. The instrument was designed according to the scale used in a previous study measuring learners' perceptions across different times (before, during or after task completion), namely Al-Kahlil's (2016) 'task-related motivation thermometer', in which motivation was measured by means of a Likert scale where 1 point meant the lowest motivation and 10 points the highest motivation.
The purpose of this instrument was to gain face validity in our research and to make sure that students were motivated at the outset of the study, so that they could give their best when performing the collaborative tasks. We also wanted to know whether the tasks were motivating to them, which will eventually inform us about the suitability of this type of task for primary education students, in addition to uncovering any task-modality differences with regard to motivation. Besides, as mentioned in the previous section, all participants were tested on English proficiency through their participation in the listening, reading, and writing sections of the Key English Test (KET; UCLES, 2014) at the outset of the study. This exam is proof of one's ability to communicate in English in simple situations.

Data collection
Once parental and school permission was issued, students were first tested on English proficiency by means of the KET. The test was administered during regular lessons in class and students were given one hour and 40 minutes to complete it. The two collaborative tasks were accomplished by the student pairs in a quiet room at school. Data were collected over a period of two weeks in two different sessions. They performed the speaking task in the first data-gathering session and the speaking+writing task in the next session. Before the completion of each task, students completed the motivation scale individually, for which they were given 1 minute, and were subsequently reminded of the importance of paying attention to accuracy in the second phase of each task, that is, in the story-telling and note-writing respectively. Taking into account that writing allows for higher levels of accuracy due to the off-line nature of the task when compared to speaking (Adams & Ross-Feldman, 2008; Azkarai & García Mayo, 2015; García Mayo & Azkarai, 2016; Niu, 2009; Payant & Kim, 2019), by asking learners to attend to accuracy in both modalities we could verify whether the framing of the task rules out the inherent focus of the task (Philp, Walter & Basturkmen, 2010). It is also important to note that a researcher was with the students in the room where the tasks were being carried out, but participants were asked to perform the task with their resources at hand and to avoid asking the researcher for help. On average, learners needed about 15 minutes to carry out each collaborative task. Once they had finished each task, they were given one minute to complete the motivation scale again, which they had to fill out individually.

Data analysis
The collaborative tasks were both audio- and video-taped. Recorded productions were orthographically transcribed and later coded in CHILDES (MacWhinney, 2000) for the production of LREs with the help of CLAN protocols. All turns in which students engaged in language discussion or self-correction were identified as LREs in each task by two independent researchers, who jointly came to an agreement in those cases in which any controversy in their classification of the LREs was detected. As for the classification of the LREs in each task, we first followed Adams and Ross-Feldman's (2008) and García Mayo and Azkarai's (2016) classification in terms of nature; that is, LREs were classified into two main categories, namely meaning-focused and form-focused. The former includes cases of word meaning or word choice, whereas the latter comprises those LREs involving spelling, phonology, morphosyntax, and prepositions.
Secondly, the aforementioned authors' taxonomy of LREs according to outcome was considered, and each of the main categories of LREs was further classified as resolved or unresolved. The former includes those cases in which the LRE reached a resolution, regardless of whether it was resolved in a target-like or non-target-like manner, categories which were added to our LRE classification according to Payant and Kim's (2019) distinction between correctly and incorrectly resolved LREs. The latter refers to those cases in which the language concern was left unresolved and no answer to the linguistic inquiry was provided by either member of the dyad. Thirdly, in the case of the task including a written product, we added an original classification in which resolved LREs were further classified into incorporated and non-incorporated, depending on whether the resolution that dyad members reached was incorporated (or not) in the eventual written outcome of the task.

(1) Unresolved meaning-focused LRE.
*CHI1: she's make eh another toy but eh (.) eh (.) mejor (Eng: 'better') how do you say mejor?
*CHI2: I don't know. (whispering)
In (1), the first child asks the second child how the Spanish word mejor is said in English, but the second child does not know, leaving the question unanswered.

(2) Incorporated target-like resolved meaning-focused LRE.
*CHI1: is the same picture.
*CHI2: is the same snake?
*CHI1: is the same snake
[Written output: 'is the seim snake in the two pictures']
In (2), the second child corrects the first child and replaces the word picture with a more precise word defining what is in the picture, that is, a snake. The first child immediately incorporates the word snake in the next turn, and it is also included in the final written output.

(3) Non-incorporated target-like resolved meaning-focused LRE.
*CHI1: eh serpiente (Eng: snake)
*CHI2: the snake
*CHI1: the snake
*CHI2: the snake
*CHI1: is here
In (3), the first child produces a word in Spanish (serpiente) and the second child gives him/her the English term (snake) in the next turn. The word is accepted by the first child immediately after, but it does not appear in the eventual written output.

(4) Incorporated non-target-like resolved meaning-focused LRE.
In (4), the first child wants to know how Spanish después is said in English. The second child provides an unintelligible word, so the first child first decides to adapt this Spanish word to English phonologically and attempts the foreignised term dispos, but then he decides to borrow the term from Basque and uses the term gero. The Basque word is finally accepted by the second child in the last utterance and is eventually incorporated in the written outcome.

(5) Non-incorporated non-target-like resolved meaning-focused LRE.
In (5), the two children are discussing how to refer to the male character in the story, the terms child, children, and boy being entertained. They finally opt for the inaccurate phrase 'a children boy', which is ultimately not incorporated in the written output.

(6) Unresolved form-focused LRE.
and front of (.) eh church mm ha eh is the (.) vet eh clinic.
In (6), the first child does not know how to pronounce the word 'church' and tells his partner, who advises him to just write it and forget about its pronunciation.

(7) Incorporated target-like resolved form-focused LRE.
In (7), the first child asks his partner how the English word 'dog' is spelt. The second child provides him/her with the right orthographical transcription of the word, which is also part of the final written output.

(8) Non-incorporated target-like resolved form-focused LRE.
*CHI1: we puts the name in your names?
*CHI2: we puts we put our?
In (8), the first child incorrectly produces the verb form 'puts' for the first person plural 'we'. The second child corrects the first child and provides the accurate form 'put', without the 3rd person singular present tense morpheme. Neither form appears in the written outcome of the task.

(9) Incorporated non-target-like resolved form-focused LRE.
*CHI1: then you have to go.
*CHI2: you have to eh.
*CHI1: xxx to go of eh at garden road
[Written output: 'then you have to go at garden road']
In (9), the first child is not sure about the right preposition to be used with the verb to go. She first attempts the preposition 'of' inaccurately, but immediately afterwards she self-corrects and opts for the wrong preposition 'at', which is incorporated in the written outcome.

(10) Non-incorporated non-target-like resolved form-focused LRE.
In (10), the first child suggests the preposition 'in' but the second child puts this suggestion into question, although he later accepts it in his utterance 'is on the vet clinic'. However, the first child changes to 'on' in the following turn, a preposition which is finally accepted by the second child with an 'ok' in his last utterance. Neither preposition is used in the eventual written output.

As far as statistical analyses are concerned, we computed both descriptive and inferential statistics. As for descriptive statistics, the number of turns, the number of turns comprising LREs, and the number of LRE types, as well as their percentages, mean scores, and standard deviations, were calculated. As far as inferential statistics are concerned, the LRE data were analysed to compare both the two tasks (inter-task analyses) and the different LRE types within each task (intra-task analyses). As for the student motivation analyses, means and standard deviations were obtained for each task at both the pre-task and the post-task phase. Motivation data were also explored through inferential statistics for both inter-task and inter-phase differences. All comparisons were made by means of non-parametric procedures (Wilcoxon signed-rank tests), as Kolmogorov-Smirnov tests indicated that the distribution of the samples was not normal. An alpha level of .05 (*) was used for statistical significance.

Results
In this section, we will show the results of the analyses performed to find answers to the four research questions. Tables 2 to 9 present the inter-task analyses conducted to explore the differences between the oral task and the oral+writing task in terms of number, types, and outcomes of LREs (RQ1). Intra-task analyses are also offered in Tables 3 to 9, so as to discover which types of LREs in terms of nature and outcome are the most common in each task (RQ2). The analyses carried out to discover whether the resolution of LREs appears in the written product (RQ3) are displayed in Tables 10 to 12. Finally, Table 13 offers the results pertaining to students' motivation.

As shown in Table 2, the data of our study were composed of 1197 turns in the oral task and 2236 in the oral+written task, of which 404 and 729 respectively were turns comprising LREs. As for the number of LREs, 110 episodes occurred in the oral task whereas 158 happened in the oral+writing task. In other words, learners' productions in general, as well as the incidence of LREs in particular, were more abundant in the oral+writing task. Mean scores revealed the very same pattern, since the oral task yielded a mean of 4.78 LREs per dyad and the oral+written task a mean of 6.83. The Wilcoxon test determined that the gap between the two mean scores was statistically significant.
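To make the inferential procedure described in the Data analysis section concrete, here is a brief sketch of the normality check and the paired non-parametric comparison, run on hypothetical per-dyad LRE counts. The counts are simulated around the means reported above (4.78 and 6.83); they are not the study's data.

```python
import numpy as np
from scipy.stats import kstest, wilcoxon, zscore

# Hypothetical per-dyad LRE counts for the two tasks (23 dyads).
rng = np.random.default_rng(1)
oral = rng.poisson(4.78, size=23)          # around the oral-task mean
oral_written = rng.poisson(6.83, size=23)  # around the oral+written-task mean

# Kolmogorov-Smirnov tests against a standard normal on z-scored data.
for name, sample in [("oral", oral), ("oral+written", oral_written)]:
    stat, p = kstest(zscore(sample), "norm")
    print(f"KS normality check ({name}): p = {p:.3f}")

# Paired comparison with the Wilcoxon signed-rank test (alpha = .05).
stat, p = wilcoxon(oral, oral_written)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")
```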
As for the nature of LREs (Table 3), inter-task analyses indicated that there were no statistically significant differences between the meaning-focused LREs produced in the first and in the second task, as attested by their similar mean scores (4.30 and 3.74). However, when the form-focused LREs produced in each task were examined, larger differences were found between the two means, the oral+written task leading to a significantly higher incidence of this type of LRE than the oral task (3.09 vs. 0.48). As for the intra-task comparisons, it is noteworthy that the dyads did not behave alike in both tasks. In the oral task, there was a significantly higher occurrence of meaning-focused LREs than of form-focused LREs, with means of 4.30 and 0.48 representing 90% and 10% of the total number of LREs produced, respectively. In the oral+written task, we found a more even distribution of meaning-focused and form-focused LREs (55.06% and 44.94%), as well as a small gap between the mean scores for these two types of LREs (3.74 and 3.09). The Wilcoxon test did not uncover significant differences between the two types of LREs in the oral+written task.

As for the outcome of the LREs, Table 4 displays the figures for all resolved and unresolved LREs in each of the two tasks. Inter-task comparisons demonstrated that there were significant differences between the mean scores of resolved LREs in the two tasks, the oral+writing task contributing higher resolution means than the oral task (6.17 vs. 3.70). However, no statistically significant differences were attested between the two tasks as far as unresolved LREs are concerned. As for the intra-task contrasts, Wilcoxon tests pointed to significantly higher means of resolved LREs than of unresolved ones in both the first (3.70 vs. 1.09) and the second task (6.17 vs. 0.65). Besides, the percentage of resolved LREs was much higher than that of unresolved ones, more so in the oral+writing (90.51% vs. 9.49%) than in the oral (77.27% vs. 22.73%) task. These outcome analyses were also conducted for meaning-focused and form-focused LREs independently.

As for meaning-focused LREs (Table 5), inter-task comparisons revealed that there were no statistically significant differences between resolved LREs in the first and in the second task, with similarly high percentages (74.75% and 86.21%) and the same mean score (3.22) achieved. Nevertheless, the gap between unresolved meaning-focused LREs in the oral and in the oral+writing task did reach statistical significance. Unresolved LREs represented 25.25% of meaning-focused LREs in the oral task, with a mean of 1.09, whereas this percentage decreased to 13.79% in the oral+written task, with a mean of 0.52. In other words, unresolved meaning-focused LREs occurred to a significantly larger extent in the oral task. As far as the intra-task contrasts are concerned, both percentage and mean figures indicated that the production of resolved meaning-focused LREs was more prominent than that of unresolved ones. As attested by the Wilcoxon tests, the gap between resolved and unresolved meaning-focused LREs was statistically significant in both tasks, even more strikingly so in the oral+writing task.

The corresponding figures for form-focused LREs are presented in Table 6. In the inter-task analyses, it was observed that resolved form-focused LREs obtained a significantly higher mean in the oral+written task than in the oral task (2.96 vs. 0.48). However, the percentages and mean scores of unresolved form-focused LREs were extremely low, the difference between both tasks not reaching statistical significance.
As for the intra-task comparisons, the pattern discovered in both tasks was that the production of resolved form-focused LREs was, by far, significantly more abundant than that of unresolved form-focused LREs, particularly in the case of the oral+writing task.

The second type of analysis carried out to examine the outcome of LREs involved whether these were resolved in a target-like manner. Table 7 displays the figures for all target-like and non-target-like resolved LREs in each task. When inter-task comparisons were made, it was discovered that the mean of target-like resolved LREs in the oral+writing task was significantly higher than the one in the oral task (4.65 vs. 2.04). However, as indicated by inferential statistics, there was no statistical support for the difference between the means of non-target-like resolved LREs in each task. Intra-task contrasts, in turn, showed that the occurrence of target-like resolved LREs in the oral task was not statistically different from that of non-target-like resolved LREs. In the oral+writing task, however, Wilcoxon tests revealed that the target-like category mean was significantly superior to that of the non-target-like one (4.65 vs. 1.62), with distribution percentages showing the very same tendency (75.53% vs. 24.47%).

As for the accuracy analyses of resolved meaning-focused LREs (Table 8), it was observed that inter-task comparisons reached statistical significance only in the case of LREs resolved in a non-target-like manner, with learner dyads producing significantly more meaning-focused resolved LREs of this type in the oral task (1.57) than in the oral+written task (0.74). No differences between the two tasks were found for the meaning-focused LREs which were resolved in a target-like manner. As regards intra-task comparisons, learners did not behave alike either. The distribution of target-like and non-target-like resolved meaning-focused LREs in the oral task was quite even, with similar percentages (51.35% and 48.65%) and mean scores (1.65 and 1.57), no statistically significant differences being found between target-like and non-target-like LREs. However, the distribution of these two types of meaning-focused resolved LREs was more dissimilar in the case of the oral+written task, with percentages of 77.33% and 22.67% in the target-like and non-target-like categories respectively. Besides, the difference between the means found statistical support in the Wilcoxon test, which pointed to a significantly higher mean for target-like than for non-target-like resolved meaning-focused LREs (2.48 vs. 0.74).

As for the accuracy analyses of resolved form-focused LREs (Table 9), Wilcoxon tests comparing the two tasks revealed that both target-like and non-target-like categories were significantly more productive in the oral+writing task than in the oral task, this difference being statistically more marked in the case of target-like resolved form-focused LREs. As far as intra-task comparisons are concerned, learners behaved differently in each task. Even though in both tasks the percentage of form-focused LREs which were resolved in a target-like manner was higher than that of those resolved in a non-target-like way, the accuracy difference was found to be statistically significant only in the case of the oral+written task, with more form-focused LREs being resolved in a target-like than in a non-target-like manner (2.17 vs. 0.78). Inferential statistics did not find accuracy differences for resolved form-focused LREs in the oral task, though.
To sum up the analysis of the results as regards incidence, nature, resolution, and outcome of the resolution, the speaking+writing task yielded a greater number of LREs, as well as more resolved LREs and more accurate resolutions. In terms of the nature of LREs, while meaning-focused episodes were more common in the oral task, a similar rate of meaning- and form-focused LREs was obtained in the speaking+writing task.

Apart from the data on LRE accuracy previously reported, we looked into the rate of target-likeness in each task by calculating the rate of target-like meaning-focused and form-focused LREs over the total number of LREs. In the oral task, target-like meaning-focused resolved LREs represented 34.55% of the total number of LREs. In the oral+writing task, meaning-focused LREs resolved in a target-like manner accounted for 36.71% of the total number of LREs. As for the rate of target-likeness of form-focused LREs, the oral task contributed 8.18% of LREs resolved in a target-like fashion over the total number of LREs. In the oral+writing task, target-like resolved form-focused LREs stood for 31.65% of the total number of LREs.

The last type of LRE analysis carried out involved the oral+writing task only. We inspected whether the outcome of the LREs resolved in a target-like and non-target-like manner by the learner dyads in their oral interaction appeared in the final written product of this task. In this regard, we distinguished between two categories of resolved LREs, namely incorporated and non-incorporated LREs. We will show the results for all resolved LREs as well as for meaning-focused LREs and form-focused LREs independently. As for all resolved LREs (Table 10), the comparison between the incorporated and non-incorporated categories turned out to be statistically significant, with a higher mean and percentage in the former than in the latter (4.87 vs. 1.30; 79.02% vs. 20.98%). When the analyses were carried out for target-like and non-target-like resolved LREs separately, the Wilcoxon tests revealed that the superiority of the incorporated category over the non-incorporated one was statistically significant in the case of target-like resolved LREs only (3.96 vs. 0.70; 85.18% vs. 14.82%). Non-target-like resolved LREs were more similarly distributed in terms of incorporation, with approaching means and percentages in the incorporated and non-incorporated categories.

Regarding the analyses of incorporation of resolved meaning-focused LREs, the data in Table 11 show that resolved meaning-focused LREs were incorporated more often than not, with a higher mean and percentage in the incorporated than in the non-incorporated category (2.48 vs. 0.74; 77.33% vs. 22.67%). These differences were significantly supported by the Wilcoxon test. The very same statistically supported pattern was found for target-like resolved meaning-focused LREs, with a mean of 2.00 incorporated LREs representing 81.03% of all target-like resolved meaning-focused LREs, as opposed to a mean of 0.48 non-incorporated LREs representing 18.97%. Nonetheless, no differences were found between the incorporated and non-incorporated categories for the non-target-like resolved meaning-focused LREs.

As for the incorporation of resolved form-focused LREs (Table 12), it was found that incorporated LREs were much more frequent than non-incorporated ones, as attested by the differences in the means (2.39 vs. 0.57) and percentages (80.88% vs. 19.12%), which the Wilcoxon tests found to be significant. When the incorporated versus
non-incorporated comparisons were made for target-like and non-target-like resolved form-focused LREs independently, some differences were attested. Target-like resolved form-focused LREs were incorporated significantly more often than not (1.96 vs. 0.22; 90% vs. 10%), whereas no differences were found for non-target-like form-focused LREs in terms of incorporation.

Finally, the analyses carried out to find out whether there are any differences as far as the motivation which students felt when accomplishing each task are offered in Table 13. Since motivation data were gathered at two different times, that is, immediately before and immediately after the completion of each task, a comparison of student motivation at both times was made. Results indicated that at the pre-task stage students were quite motivated to take part in the study, as evidenced by mean scores of 7.80 and 8.07 (out of a maximum of 10) in tasks 1 and 2, respectively. When these means were compared to the ones students gave at the post-task stage, Wilcoxon tests indicated that there were significant differences between both times in both tasks, students showing higher motivation at the post-task than at the pre-task stage. In other words, students' engagement in the tasks resulted in increased motivation, from 7.80 to 9.36 in the oral task and from 8.07 to 9.54 in the oral+writing task. Additionally, inferential statistics comparing the motivation means of the two tasks were performed. No statistically significant differences were found between students' motivation levels for each task either before (7.80 vs. 8.07) or after (9.36 vs. 9.54) their accomplishment.

Discussion
In this section the four research questions of the study will be answered. As for the first research question (Are there any differences between tasks in terms of number, types, and outcomes of LREs?), the oral+written mode promoted a greater number of LREs and, in particular, more form-focused LREs, as well as more resolved LREs and more correctly resolved LREs, a finding in line with previous research with both adults (Adams & Ross-Feldman, 2008; García Mayo & Azkarai, 2016; Niu, 2009; Payant & Kim, 2019) and young learners (García Mayo & Imaz Agirre, 2019). Speaking+writing tasks offer learners more time to think and to reflect on their written products, so more learning opportunities operationalized as LREs emerge. Likewise, both unresolved meaning-focused LREs and non-target-like resolved meaning-focused LREs were more frequent in the speaking task (see also Payant & Kim, 2019 in this respect). The oral task is a more immediate task, with very little time for planning and editing, and implies a greater cognitive load than writing (see Grabowski, 2007; Granfeldt, 2007 in this respect), as: (i) in speaking, the information produced must be maintained exclusively in memory, while in writing, the already written text can be re-read; (ii) speaking is faster than writing; (iii) cognitive resources can be used for a longer period of time in writing; (iv) speaking requires continuous progress, whereas language production in writing is self-determined, the writer being able to stop the grapho-motoric process and to concentrate only on retrieval or on the planning process (Kuiken & Vedder, 2012, pp. 365-366). The oral task, thus, may pose more lexical difficulties that need to be overcome by these learners in order to move the task forward.
Besides, these learners' low proficiency and their young age might prevent them from resolving the meaning-focused LREs in a more target-like way. Unlike adults, these young learners might benefit even less from the availability of cognitive resources during a shorter period of time in the oral mode, which could help them solve their lexical gaps. This claim is also reinforced by the evidence available from previous studies on negotiation of meaning carried out in an ESL setting that have compared children and adults (Oliver, 1998, 2009): children are able to negotiate, but at different rates. Likewise, studies on the provision of feedback with young learners have reported that children interact but not in a way that promotes accuracy (Lyster, 2001).

With respect to the second research question (What are the most common types of LREs (nature and outcome) in each task?), in the speaking task meaning-related LREs were more common, even though learners had been requested to attend to accuracy. Thus, the inherent focus of the task overrules the framing of the task (Philp, Walter, & Basturkmen, 2010). To satisfy the demands of this more immediate and communication-oriented task, learners need key vocabulary to move it along (García Mayo & Azkarai, 2016; Payant & Kim, 2019; Swain & Watanabe, 2013). In terms of resolution, there were more resolved than unresolved LREs in the case of both meaning- and form-related episodes. However, this task did not yield more target-like resolutions in either condition. The on-line nature of this speaking task, together with its primary focus (communication of meaning), may limit opportunities for greater accuracy. In addition, when compared to adults and adolescents from other investigations (Lasito & Storch, 2013; Niu, 2009), the rate of target-likeness seems to be even lower in this task, particularly in the case of form-focused LREs. This finding could be related to the young age of the participants of the present study. In this immediate and ephemeral task under communicative pressure (Adams, 2006), these young learners have even more difficulty rapidly retrieving and verbalizing the explicit knowledge which could help them solve their linguistic gaps in a more efficient way. Negotiating successfully could entail greater effort and might need more time to develop in this age range. We cannot forget that these learners are immersed in a foreign language context, in which cognitive maturity is key to success (Tellier & Roehr-Brackin, 2017). As learners gain cognitive maturity, metalinguistic awareness increases in foreign language contexts, which could aid them in negotiating more successfully.

As for the speaking+writing task, the ratio of lexis and form LREs was comparable, a finding which could be related to the low proficiency of the learners (García Mayo & Imaz Agirre, 2019; Leeser, 2004; Payant & Kim, 2019; Williams, 2001). These low-proficiency learners seem to be in need of key vocabulary to move the task forward (García Mayo & Imaz Agirre, 2019). But as in previous research with adult learners, they also attend to formal aspects in this task (Adams, 2006; Adams & Ross-Feldman, 2008; García Mayo & Azkarai, 2016; Niu, 2009; Payant & Kim, 2019). In other words, the addition of a written component increases learner opportunities to focus more on grammatical aspects (Adams, 2006).
Likewise, this task yielded more target-like episodes in the case of both meaning- and form-related episodes, supporting previous research with adult learners (Adams, 2006; Adams & Ross-Feldman, 2008; García Mayo & Azkarai, 2016; Niu, 2009; Payant & Kim, 2019). Thanks to the extra processing time, learners can notice form and increase accuracy (Payant & Kim, 2019). In these conditions, young learners have more time to draw upon their explicit knowledge (Ellis, 2003), as adults do. Nevertheless, even if the rate of target-likeness in form-focused LREs is higher in the speaking+writing task, LREs are not so elaborated, since learners usually resolve them by providing the relevant form without further explanations or justifications, as observed in (11) and (12) (see Niu, 2009 for the classification of LREs in terms of elaboration in adult learners):

(11)
*CHI2: the dog go.
*CHI1: goes.
*CHI2: goes.
%sit: CHI2 continues writing.
*CHI2: eh first to the dental (.) clinic.

The fact that the resolutions of LREs were not as elaborated as adults' could be explained by child learners' still-developing metalinguistic awareness (Muñoz, 2017b). Additionally, we cannot dismiss the fact that primary education in Spain is characterized by a strong oral component and a marked emphasis on vocabulary (Muñoz, 2017b).

As an answer to the third research question (How does the resolution of LREs affect the written product in the speaking+writing task?), those episodes which were resolved in a target-like way were incorporated in the written product. Thus, language discussions geared towards accuracy do have a reflection in the final written product. This result is not fully comparable to previous findings, as, to our knowledge, no investigations have been conducted analysing the relationship between the resolution of LREs and the written output of collaborative tasks. However, the finding obtained in the present study is in line with prior research that has established a relationship between accurate resolutions and language development in the posttests administered (Basterrechea & García Mayo, 2013; Payant & Kim, 2019). Even if more research along the same lines is timely, the evidence reported in the present study confirms the great potential of this speaking+writing task as far as the contribution of LREs to language learning is concerned.

As regards the last research question (Are there any differences between the two tasks in terms of student motivation?), the students were quite motivated before the completion of the tasks, and their degree of motivation increased even further as a result of their engagement in the tasks. Likewise, no motivation differences were found between the two tasks either at the pre- or the post-task stage. These findings indicate that the tasks designed in our research were taken seriously by the children who participated in the study, perhaps because they perceived a connection between them and other educational activities they could be engaged in while in class (Mackey & Gass, 2005, p. 107). Both tasks turned out to be clearly attractive to young English learners, so teachers may be amenable to include them in primary education classrooms, more particularly the oral+written task, given its superiority in terms of language learning opportunities, as attested by the LRE findings both in our study and in previous literature. As claimed by Hunt et al. (2005; as cited in Shak & Gardner, 2008), '[c]hildren will only persist in learning tasks if they see them as worthwhile' (p. 374).
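To make the paired comparison behind these motivation findings concrete, here is a minimal sketch of a Wilcoxon signed-rank test on pre- vs. post-task ratings. The sketch is ours, not the study's: the 0-10 ratings and variable names are invented for illustration.

```python
from scipy.stats import wilcoxon

# Hypothetical motivation ratings (0-10) for the same learners,
# collected immediately before and immediately after a task.
pre_task = [7, 8, 8, 7, 9, 8, 7, 8, 9, 7]
post_task = [9, 9, 10, 9, 10, 9, 9, 10, 10, 9]

# The Wilcoxon signed-rank test suits paired ordinal data such as
# these ratings; a small p-value indicates a reliable pre/post shift.
stat, p_value = wilcoxon(pre_task, post_task)
print(f"W = {stat}, p = {p_value:.4f}")
```

Since the same learners completed both tasks, the between-task comparison at each stage could be run the same way, pairing each learner's ratings for the two tasks.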
Conclusions

This paper has contributed to the literature regarding the effect of task-modality on the production of LREs among young learners, a line of research still in its infancy. Although the two task modalities examined in this study were equally motivating to students, the examination of the LRE data indicated that task-modality had a strong effect on the occurrence and resolution of LREs, as a greater number of LREs and more resolved LREs were obtained in the speaking+writing task, a finding in line with previous research with adults. Although learners had been encouraged to focus on accuracy in the speaking task, the inherent focus of this task overruled its framing. However, the nature of LREs was partially mediated by task-modality: even if the speaking task promoted more meaning-focused LREs, the speaking+writing task yielded a similar rate of meaning- and form-focused LREs, a finding which could be ascribed to the low proficiency of these young learners. In addition, unlike for adults, the rate of target-like episodes was lower in the speaking task. Likewise, the type of resolutions of LREs attested in these young learners does not match the more elaborated ones observed in the literature with adult learners (Niu, 2009). Thus, the enhancement of metalinguistic awareness through appropriate form-focused training conditions could lead to extended negotiation of form among young learners (see Bouffard & Sarkar, 2008 for an example along these lines), and, by implication, greater development of language accuracy in these minimal input contexts (Tellier & Roehr-Brackin, 2017). All in all, the present study supports the superiority of the speaking+writing task in the promotion of learning opportunities for young learners, a result reinforced by the incorporation of target-like resolved episodes in the written product.

For future research, it would be convenient to investigate the effect of a wider range of tasks that could draw learners' attention to form more extensively by promoting the use of metalanguage and the verbalization of rules among learners. We also acknowledge that the tasks used in the speaking and the speaking+writing modalities differ in type: the speaking task is a storytelling task, whereas the speaking+writing task is an opinion-gap task. Follow-up studies should control for the nature of both tasks. Similarly, the investigation of the combination of different variables (i.e., age, proficiency, gender) and type of pairings would be advisable so as to offer learners the best learning conditions in this age period. Future studies should also consider the inclusion of tailor-made tests to measure actual learning gains.

Credit author statement

Both authors have participated in the conceptualization, design, analysis, writing, and revision of the manuscript.